
This Week in Security News: Microsoft Patches 120 Vulnerabilities, Including Two Zero-Days and Trend Micro Brings DevOps Agility and Automation to Security Operations Through Integration with AWS Solutions


Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, read about one of Microsoft’s largest Patch Tuesday updates ever, including fixes for 120 vulnerabilities and two zero-days. Also, learn about Trend Micro’s new integrations with Amazon Web Services (AWS).

 

Read on:

 

Microsoft Patches 120 Vulnerabilities, Two Zero-Days

This week Microsoft released fixes for 120 vulnerabilities, including two zero-days, in 13 products and services as part of its monthly Patch Tuesday rollout. The August release marks its third-largest Patch Tuesday update, bringing the total number of security fixes for 2020 to 862. “If they maintain this pace, it’s quite possible for them to ship more than 1,300 patches this year,” says Dustin Childs of Trend Micro’s Zero-Day Initiative (ZDI).

 

XCSSET Mac Malware: Infects Xcode Projects, Performs UXSS Attack on Safari, Other Browsers, Leverages Zero-day Exploits

Trend Micro has discovered an unusual infection related to Xcode developer projects. Upon further investigation, it was discovered that an affected developer's Xcode project contained the source malware, which led to a rabbit hole of malicious payloads. Most notable in the investigation is the discovery of two zero-day exploits: one is used to steal cookies via a flaw in the behavior of Data Vaults; the other is used to abuse the development version of Safari.

 

Top Tips for Home Cybersecurity and Privacy in a Coronavirus-Impacted World: Part 1

We’re all now living in a COVID-19-impacted world characterized by uncertainty, mass home working and remote learning. To help you adapt to these new conditions while protecting what matters most, Trend Micro has developed a two-part blog series on ‘the new normal’. Part one identifies the scope and specific cyber-threats of the new normal.

 

Trend Micro Brings DevOps Agility and Automation to Security Operations Through Integration with AWS Solutions

Trend Micro enhances agility and automation in cloud security through integrations with Amazon Web Services (AWS). Through this collaboration, Trend Micro Cloud One offers the broadest platform support and API integration to protect AWS infrastructure whether building with Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS Lambda, AWS Fargate, containers, Amazon Simple Storage Service (Amazon S3), or Amazon Virtual Private Cloud (Amazon VPC) networking.

 

Shedding Light on Security Considerations in Serverless Cloud Architectures

The big shift to serverless computing is imminent. According to a 2019 survey, 21% of enterprises have already adopted serverless technology, while 39% are considering it. Trend Micro’s new research on serverless computing aims to shed light on the security considerations in serverless environments and help adopters in keeping their serverless deployments as secure as possible.

 

In One Click: Amazon Alexa Could be Exploited for Theft of Voice History, PII, Skill Tampering

Amazon’s Alexa voice assistant could be exploited to hand over user data due to security vulnerabilities in the service’s subdomains. The smart assistant, which is found in devices such as the Amazon Echo and Echo Dot — with over 200 million shipments worldwide — was vulnerable to attackers seeking user personally identifiable information (PII) and voice recordings.

 

New Attack Lets Hackers Decrypt VoLTE Encryption to Spy on Phone Calls

A team of academic researchers presented a new attack called ‘ReVoLTE’ that could let remote attackers break the encryption used by VoLTE voice calls and spy on targeted phone calls. The attack doesn’t exploit any flaw in the Voice over LTE (VoLTE) protocol; instead, it leverages weak implementations of LTE mobile networks by most telecommunication providers in practice, allowing an attacker to eavesdrop on the encrypted phone calls made by targeted victims.

 

An Advanced Group Specializing in Corporate Espionage is on a Hacking Spree

A Russian-speaking hacking group specializing in corporate espionage has carried out 26 campaigns since 2018 in attempts to steal vast amounts of data from the private sector, according to new findings. The hacking group, dubbed RedCurl, stole confidential corporate documents including contracts, financial documents, employee records and legal records, according to research published this week by the security firm Group-IB.

 

Walgreens Discloses Data Breach Impacting Personal Health Information of More Than 72,000 Customers

The second-largest pharmacy chain in the U.S. recently disclosed a data breach that may have compromised the personal health information (PHI) of more than 72,000 individuals across the United States. According to Walgreens spokesman Jim Cohn, prescription information of customers was stolen during May protests, when around 180 of the company’s 9,277 locations were looted.

 

Top Tips for Home Cybersecurity and Privacy in a Coronavirus-Impacted World: Part 2

The past few months have seen radical changes to our work and home life under the Coronavirus threat, upending norms and confining millions of American families within just four walls. In this context, it’s not surprising that more of us are spending an increasing portion of our lives online. In the final blog of this two-part series, Trend Micro discusses what you can do to protect your family, your data, and access to your corporate accounts.

 

What are your thoughts on Trend Micro’s tips to make your home cybersecurity and privacy stronger in the COVID-19-impacted world? Share your thoughts in the comments below or follow me on Twitter to continue the conversation: @JonLClay.


How to pick the best cyber range for your cybersecurity training needs and budget

Introduction Whether your organization is tired of being held back by the cybersecurity workforce skills gap or your management team has watched the worst that a cyberattack could do to a peer organization, the time has come to do something about it. One of the best decisions your organization can make is to explore how […]

The post How to pick the best cyber range for your cybersecurity training needs and budget appeared first on Infosec Resources.


How to pick the best cyber range for your cybersecurity training needs and budget was first posted on October 7, 2020 at 8:06 am.

Cybersecurity Considerations in the Work-From-Home Era


Note: This article originally appeared in Verisign’s Q3 2020 Domain Name Industry Brief.

Verisign is deeply committed to protecting our critical internet infrastructure from potential cybersecurity threats, and to keeping up to date on the changing cyber landscape. 

Over the years, cybercriminals have grown more sophisticated, adapting to changing business practices and diversifying their approaches in non-traditional ways. We have seen security threats continue to evolve in 2020, as many businesses have shifted to a work from home posture due to the COVID-19 pandemic. For example, the phenomenon of “Zoom-bombing” video meetings and online learning sessions had not been a widespread issue until, suddenly, it became one. 

As more people began accessing company applications and files over their home networks, IT departments implemented new tools and set new policies to find the right balance between protecting company assets and sensitive information, and enabling employees to be just as productive at home as they would be in the office. Even the exponential jump in the use of home-networked printers that might or might not be properly secured represented a new security consideration for some corporate IT teams. 

An increase in phishing scams accompanied this shift in working patterns. About a month after much of the global workforce began working from home in greater numbers, the Federal Bureau of Investigation (FBI) reported about a 300 percent to 400 percent spike in cybersecurity complaints received by its Internet Crime Complaint Center (IC3) each day. According to the International Criminal Police Organization (Interpol), “[o]f global cyber-scams, 59% are coming in the form of spear phishing.” These phishing campaigns targeted an array of sectors, such as healthcare and government agencies, by imitating health experts or COVID-related charities.

Proactive steps can help businesses improve their cybersecurity hygiene and guard against phishing scams. One of these steps is for companies to focus part of their efforts on educating employees on how to detect and avoid malicious websites in phishing emails. Companies can start by building employee understanding of how to identify the destination domain of a URL (Uniform Resource Locator, commonly referred to as a “link”) embedded in an email that may be malicious. URLs can be complex and confusing, and cybercriminals, who are well aware of that complexity, often use deceptive tactics within the URLs to mask the malicious destination domain. Companies can take proactive steps to inform their employees of these deceptive tactics and help them avoid malicious websites. Some of the most common tactics are described in Table 1 below.

  • Combosquatting: Adding words such as “secure,” “login” or “account” to a familiar domain name to trick users into thinking it is affiliated with the known domain name.
  • Typosquatting: Using domain names that resemble a familiar name but incorporate common typographical mistakes, such as reversing letters or leaving out or adding a character.
  • Levelsquatting: Using familiar names/domain names as part of a subdomain within a URL, making it difficult to discover the real destination domain.
  • Homograph attacks: Using homograph, or lookalike, domain names, such as substituting the uppercase “I” or number “1” where a lowercase “l” should have been used, or using “é” instead of an “e.”
  • Misplaced domain: Planting a familiar domain name within a complex-looking URL. The familiar domain name could be found in a path (after a “/”), as part of the additional parameters (after a “?”), as an anchor/fragment identifier (after a “#”) or in the HTTP credentials (before “@”).
  • URL-encoded characters: Placing URL-encoded characters (%[code]), which are sometimes used in URL parameters, into the domain name itself.

Table 1. Common tactics used by cybercriminals to mask the destination domain.
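Several of these tactics exploit the gap between where a URL appears to point and the host it actually names. One way to see through them is to parse the URL and extract only the hostname, which is the true destination regardless of what appears in the path, query string, fragment or credentials. A minimal Python sketch (the attacker and brand domains below are hypothetical):

```python
from urllib.parse import urlsplit

# Hypothetical deceptive URLs: "example.com" stands in for a familiar brand
# domain and "attacker.test" for a malicious destination.
urls = [
    "https://example.com.login-secure.attacker.test/",  # levelsquatting
    "https://attacker.test/example.com/login",          # familiar name in the path
    "https://attacker.test/login?next=example.com",     # familiar name in a parameter
    "https://example.com@attacker.test/login",          # familiar name in credentials
]

for url in urls:
    # urlsplit().hostname yields the real destination host, ignoring the
    # path, query string, fragment and any userinfo before "@".
    host = urlsplit(url).hostname
    print(f"{url}\n  -> actually connects to: {host}")
```

In every case the extracted hostname is the attacker-controlled domain, which is exactly the check users are being taught to perform by eye.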

Teaching users to find and understand the domain portion of the URL can have lasting and positive effects on an organization’s ability to avoid phishing links. By providing employees (and their families) with this basic information, companies can better protect themselves against cybersecurity issues such as compromised networks, financial losses and data breaches.
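Part of this vigilance can also be automated. As an illustration, the typosquatting tactic from Table 1 can be flagged by comparing a candidate domain against known brand domains using edit distance. This is a simplified sketch, not a production detector, and “example.com” is a stand-in brand:

```python
# Flag domains within a small edit distance of a known brand domain.
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,           # deletion
                           cur[j - 1] + 1,        # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def looks_like_typosquat(candidate: str, brand: str, threshold: int = 2) -> bool:
    # Close to the brand name, but not identical to it.
    return candidate != brand and edit_distance(candidate, brand) <= threshold

print(looks_like_typosquat("examp1e.com", "example.com"))   # "1" swapped for "l"
print(looks_like_typosquat("unrelated.org", "example.com"))
```

Real-world detectors combine checks like this with homograph normalization and substring matching for combosquatting, but the principle is the same: compare what the user sees against what they are likely to trust.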

To learn more about what you can do to protect yourself and your business against possible cyber threats, check out the STOP. THINK. CONNECT. campaign online at https://www.stopthinkconnect.org. STOP. THINK. CONNECT. is a global online safety awareness campaign led by the National Cyber Security Alliance and in partnership with the Anti-Phishing Working Group to help all digital citizens stay safer and more secure online.

The post Cybersecurity Considerations in the Work-From-Home Era appeared first on Verisign Blog.

Black Friday and Cyber Monday – here’s what you REALLY need to do!

The world fills up with cybersecurity tips every year when Black Friday comes round. But what about the rest of the year?

Ongoing Community Work to Mitigate Domain Name System Security Threats

For over a decade, the Internet Corporation for Assigned Names and Numbers (ICANN) and its multi-stakeholder community have engaged in an extended dialogue on the topic of DNS abuse, and the need to define, measure and mitigate DNS-related security threats. With increasing global reliance on the internet and DNS for communication, connectivity and commerce, the members of this community have important parts to play in identifying, reporting and mitigating illegal or harmful behavior, within their respective roles and capabilities.

As we consider the path forward on necessary and appropriate steps to improve mitigation of DNS abuse, it’s helpful to reflect briefly on the origins of this issue within ICANN, and to recognize the various and relevant community inputs to our ongoing work.

As a starting point, it’s important to understand ICANN’s central role in preserving the security, stability, resiliency and global interoperability of the internet’s unique identifier system, and also the limitations established within ICANN’s bylaws. ICANN’s primary mission is to ensure the stable and secure operation of the internet’s unique identifier systems, but as expressly stated in its bylaws, ICANN “shall not regulate (i.e., impose rules and restrictions on) services that use the internet’s unique identifiers or the content that such services carry or provide, outside the express scope of Section 1.1(a).” As such, ICANN’s role is important, but limited, when considering the full range of possible definitions of “DNS Abuse,” and developing a comprehensive understanding of security threat categories and the roles and responsibilities of various players in the internet infrastructure ecosystem is required.

In support of this important work, ICANN’s generic top-level domain (gTLD) contracted parties (registries and registrars) continue to engage with ICANN, and with other stakeholders and community interest groups, to address key factors related to effective and appropriate DNS security threat mitigation, including:

  • Determining the roles and responsibilities of the various service providers across the internet ecosystem;
  • Delineating categories of threats: content, infrastructure, illegal vs. harmful, etc.;
  • Understanding the precise operational and technical capabilities of various types of providers across the internet ecosystem;
  • Clarifying the relationships, if any, that respective service providers have with individuals or entities responsible for creating and/or removing the illegal or abusive activity;
  • Defining the role of third-party “trusted notifiers,” including government actors, in identifying and reporting illegal and abusive behavior to the appropriate service provider;
  • Establishing processes to ensure infrastructure providers can trust third-party notifiers to reliably identify and provide evidence of illegal or harmful content;
  • Promoting administrative and operational scalability in trusted notifier engagements;
  • Determining the necessary safeguards around liability, due process, and transparency to ensure domain name registrants have recourse when the DNS is used as a tool to police DNS security threats, particularly when related to content; and
  • Supporting ICANN’s important and appropriate role in coordination and facilitation, particularly as a centralized source of data, tools, and resources to help and hold accountable those parties responsible for managing and maintaining the internet’s unique identifiers.
Figure 1: The Internet Ecosystem

Definitions of Online Abuse

To better understand the various roles, responsibilities and processes, it’s important to first define illegal and abusive online activity. While perspectives may vary across our wide range of interest groups, the emerging consensus on definitions and terminology is that these activities can be categorized as DNS Security Threats, Infrastructure Abuse, Illegal Content, or Abusive Content, with ICANN’s remit generally limited to the first two categories.

  • DNS Security Threats: defined as being “composed of five broad categories of harmful activity [where] they intersect with the DNS: malware, botnets, phishing, pharming, and spam when [spam] serves as a delivery mechanism for those other forms of DNS Abuse.”
  • Infrastructure Abuse: a broader set of security threats that can impact the DNS itself – including denial-of-service / distributed denial-of-service (DoS / DDoS) attacks, DNS cache poisoning, protocol-level attacks, and exploitation of implementation vulnerabilities.
  • Illegal Content: content that is unlawful and hosted on websites that are accessed via domain names in the global DNS. Examples might include the illegal sale of controlled substances or the distribution of child sexual abuse material (CSAM), and proven intellectual property infringement.
  • Abusive Content: content hosted on websites using the domain name infrastructure that is deemed “harmful,” either under applicable law or norms, which could include scams, fraud, misinformation, or intellectual property infringement where illegality has yet to be established by a court of competent jurisdiction.

Behavior within each of these categories constitutes abuse, and it is incumbent on members of the community to actively work to combat and mitigate these behaviors where they have the capability, expertise and responsibility to do so. We recognize the benefit of coordination with other entities, including ICANN within its bylaw-mandated remit, across their respective areas of responsibility.

ICANN Organization’s Efforts on DNS Abuse

The ICANN Organization has been actively involved in advancing work on DNS abuse, including the 2017 initiation of the Domain Abuse Activity Reporting (DAAR) system by the Office of the Chief Technology Officer. DAAR is a system for studying and reporting on domain name registration and security threats across top-level domain (TLD) registries, with an overarching purpose to develop a robust, reliable, and reproducible methodology for analyzing security threat activity, which the ICANN community may use to make informed policy decisions. The first DAAR reports were issued in January 2018 and they are updated monthly. Also in 2017, ICANN published its “Framework for Registry Operators to Address Security Threats,” which provides helpful guidance to registries seeking to improve their own DNS security posture.

The ICANN Organization also plays an important role in enforcing gTLD contract compliance and implementing policies developed by the community via its bottom-up, multi-stakeholder processes. For example, over the last several years, it has conducted registry and registrar audits of the anti-abuse provisions in the relevant agreements.

The ICANN Organization has also been a catalyst for increased community attention and action on DNS abuse, including initiating the DNS Security Facilitation Initiative Technical Study Group, which was formed to investigate mechanisms to strengthen collaboration and communication on security and stability issues related to the DNS. Over the last two years, there have also been multiple ICANN cross-community meeting sessions dedicated to the topic, including the most recent session hosted by the ICANN Board during its Annual General Meeting in October 2021. Also, in 2021, ICANN formalized its work on DNS abuse into a dedicated program within the ICANN Organization. These enforcement and compliance responsibilities are very important to ensure that all of ICANN’s contracted parties are living up to their obligations, and that any so-called “bad actors” are identified and remediated or de-accredited and removed from serving the gTLD registry or registrar markets.

The ICANN Organization continues to develop new initiatives to help mitigate DNS security threats, including: (1) expanding DAAR to integrate some country code TLDs, and to eventually include registrar-level reporting; (2) work on COVID domain names; (3) contributions to the development of a Domain Generating Algorithms Framework and facilitating waivers to allow registries and registrars to act on imminent security threats, including botnets at scale; and (4) plans for the ICANN Board to establish a DNS abuse caucus.

ICANN Community Inputs on DNS Abuse

As early as 2009, the ICANN community began to identify the need for additional safeguards to help address DNS abuse and security threats, and those community inputs increased over time and have reached a crescendo over the last two years. In the early stages of this community dialogue, the ICANN Governmental Advisory Committee, via its Public Safety Working Group, identified the need for additional mechanisms to address “criminal activity in the registration of domain names.” In the context of renegotiation of the Registrar Accreditation Agreement between ICANN and accredited registrars, and the development of the New gTLD Base Registry Agreement, the GAC played an important and influential role in highlighting this need, providing formal advice to the ICANN Board, which resulted in new requirements for gTLD registry and registrar operators, and new contractual compliance requirements for ICANN.

Following the launch of the 2012 round of new gTLDs, and the finalization of the 2013 amendments to the RAA, several ICANN bylaw-mandated review teams engaged further on the issue of DNS Abuse. These included the Competition, Consumer Trust and Consumer Choice Review Team (CCT-RT), and the second Security, Stability and Resiliency Review Team (SSR2-RT). Both final reports identified and reinforced the need for additional tools to help measure and combat DNS abuse. Also, during this timeframe, the GAC, along with the At-Large Advisory Committee and the Security and Stability Advisory Committee, issued their own respective communiques and formal advice to the ICANN Board reiterating or reinforcing past statements, and providing support for recommendations in the various Review Team reports. Most recently, the SSAC issued SAC 115 titled “SSAC Report on an Interoperable Approach to Addressing Abuse Handling in the DNS.” These ICANN community group inputs have been instrumental in bringing additional focus and/or clarity to the topic of DNS abuse, and have encouraged ICANN and its gTLD registries and registrars to look for improved mechanisms to address the types of abuse within our respective remits.

During 2020 and 2021, ICANN’s gTLD contracted parties have been constructively engaged with other parts of the ICANN community, and with ICANN Org, to advance improved understanding on the topic of DNS security threats, and to identify new and improved mechanisms to enhance the security, stability and resiliency of the domain name registration and resolution systems. Collectively, the registries and registrars have engaged with nearly all groups represented in the ICANN community, and we have produced important documents related to DNS abuse definitions, registry actions, registrar abuse reporting, domain generating algorithms, and trusted notifiers. These all represent significant steps forward in framing the context of the roles, responsibilities and capabilities of ICANN’s gTLD contracted parties, and, consistent with our Letter of Intent commitments, Verisign has been an important contributor, along with our partners, in these Contracted Party House initiatives.

In addition, the gTLD contracted parties and ICANN Organization continue to engage constructively on a number of fronts, including upcoming work on standardized registry reporting, which will help result in better data on abuse mitigation practices that will help to inform community work, future reviews, and provide better visibility into the DNS security landscape.

Other Groups and Actors Focused on DNS Security

It is important to note that groups outside of ICANN’s immediate multi-stakeholder community have contributed significantly to the topic of DNS abuse mitigation:

Internet & Jurisdiction Policy Network
The Internet & Jurisdiction Policy Network is a multi-stakeholder organization addressing the tension between the cross-border internet and national jurisdictions. Its secretariat facilitates a global policy process engaging over 400 key entities from governments, the world’s largest internet companies, technical operators, civil society groups, academia and international organizations from over 70 countries. The I&JP has been instrumental in developing multi-stakeholder inputs on issues such as trusted notifier, and Verisign has been a long-time contributor to that work since the I&JP’s founding in 2012.

DNS Abuse Institute
The DNS Abuse Institute was formed in 2021 to develop “outcomes-based initiatives that will create recommended practices, foster collaboration and develop industry-shared solutions to combat the five areas of DNS Abuse: malware, botnets, phishing, pharming, and related spam.” The Institute was created by Public Interest Registry, the registry operator for the .org TLD.

Global Cyber Alliance
The Global Cyber Alliance is a nonprofit organization dedicated to making the internet a safer place by reducing cyber risk. The GCA builds programs, tools and partnerships to sustain a trustworthy internet to enable social and economic progress for all.

ECO “topDNS” DNS Abuse Initiative
Eco is the largest association of the internet industry in Europe. Eco is a long-standing advocate of an “Internet with Responsibility” and of self-regulatory approaches, such as the DNS Abuse Framework. The eco “topDNS” initiative will help bring together stakeholders with an interest in combating and mitigating DNS security threats, and Verisign is a supporter of this new effort.

Other Community Groups
Verisign contributes to the anti-abuse, technical and policy communities: We continuously engage with ICANN and an array of other industry partners to help ensure the continued safe and secure operation of the DNS. For example, Verisign is actively engaged in groups such as the Anti-Phishing Working Group (APWG), the Messaging, Malware and Mobile Anti-Abuse Working Group (M3AAWG), FIRST and the Internet Engineering Task Force (IETF).

What Verisign is Doing Today

As a leader in the domain name industry and DNS ecosystem, Verisign supports and has contributed to the cross-community efforts enumerated above. In addition, Verisign also engages directly by:

  • Monitoring for abuse: Protecting against abuse starts with knowing what is happening in our systems and services, in a timely manner, and being capable of detecting anomalous or abusive behavior, and then reacting to address it appropriately. Verisign works closely with a range of actors, including trusted notifiers, to help ensure our abuse mitigation actions are informed by sources with necessary subject matter expertise and procedural rigor.
  • Blocking and redirecting abusive domain names: Blocking certain domain names that have been identified by Verisign and/or trusted third parties as security threats, including botnets that leverage well-understood and characterized domain generation algorithms, helps us to protect our infrastructure and neutralize or otherwise minimize potential security and stability threats more broadly by remediating abuse enabled via domain names in our TLDs. For example, earlier this year, Verisign observed a botnet family that was responsible for such a disproportionate amount of total global DNS queries that we were compelled to act to remediate the botnet. This was referenced in Verisign’s Q1 2021 Domain Name Industry Brief, Volume 18, Issue 2.
  • Avoiding disposable domain name registrations: While heavily discounted domain name pricing strategies may promote short-term sales, they may also attract a spectrum of registrants who might be engaged in abuse. Some security threats, including phishing and botnets, exploit the ability to register large numbers of ‘disposable’ domain names rapidly and cheaply. Accordingly, Verisign avoids marketing programs that would permit our TLDs to be characterized in this class of ‘disposable’ domains, that have been shown to attract miscreants and enable abusive behavior.
  • Maintaining a cooperative and responsive partnership with law enforcement and government agencies, and engagement with courts of relevant jurisdiction: To ensure the security, stability and resiliency of the DNS and the internet at large, we have developed and maintained constructive relationships with United States and international law enforcement and government agencies to assist in addressing imminent and ongoing substantial security threats to operational applications and critical internet infrastructure, as well as illegal activity associated with domain names.
  • Ensuring adherence of contractual obligations: Our contractual frameworks, including our registry policies and .com Registry-Registrar Agreements, help provide an effective legal framework that discourages abusive domain name registrations. We believe that fair and consistent enforcement of our policies helps to promote good hygiene within the registrar channel.
  • Entering into a binding Letter of Intent with ICANN that commits both parties to cooperate in taking a leadership role in combating security threats. This includes working with the ICANN community to determine the appropriate process for, and development and implementation of, best practices related to combating security threats; to educate the wider ICANN community about security threats; and support activities that preserve and enhance the security, stability and resiliency of the DNS. Verisign also made a substantial financial commitment in direct support of these important efforts.

Trusted Notifiers

An important concept and approach for mitigating illegal and abusive activity online is the ability to engage with and rely upon third-party “trusted notifiers” to identify and report such incidents at the appropriate level in the DNS ecosystem. Verisign has supported and been engaged in the good work of the Internet & Jurisdiction Policy Network since its inception, and we’re encouraged by its recent progress on trusted notifier framing. As mentioned earlier, there are some key questions to be addressed as we consider the viability of engaging trusted notifiers, or building trusted notifier entities, to help mitigate illegal and abusive online activity.

Verisign’s recent experience with the U.S. government (NTIA and FDA) in combating illegal online opioid sales has been very helpful in illuminating a possible approach for third-party trusted notifier engagement. As noted, we have also benefited from direct engagement with the Internet Watch Foundation and law enforcement in combating CSAM. These recent examples of third-party engagement have underscored the value of a well-formed and executed notification regime, supported by clear expectations, due diligence and due process.

Discussions around trusted notifiers and an appropriate framework for engagement are under way, and Verisign recently engaged with other registries and registrars to lead the development of such a framework for further discussion within the ICANN community. We have significant expertise and experience as an infrastructure provider within our areas of technical, legal and contractual responsibility, and we are aggressive in protecting our operations from bad actors. But in matters related to illegal or abusive content, we need and value contributions from third parties to appropriately identify such behavior when supported by necessary evidence and due diligence. Precisely how such third-party notifications can be formalized and supported at scale is an open question, but one that requires further exploration and work. Verisign is committed to continuing to contribute to these ongoing discussions as we work to mitigate illegal and abusive threats to the security, stability and resiliency of the internet.

Conclusion

Over the last several years, DNS abuse and DNS-related security threat mitigation have been important topics of discussion in and around the ICANN community. In cooperation with ICANN, contracted parties and other community groups, the DNS ecosystem, including Verisign, has been constructively engaged in developing a common understanding and in practical work to advance these efforts, with the goal of meaningfully reducing the level and impact of malicious activity in the DNS. Beyond its contractual compliance functions, ICANN’s contributions have helped advance this work, and it continues to serve a critical coordination and facilitation function that brings the community together on the topic. The ICANN community’s recent focus on DNS abuse has been helpful, and significant progress has been made, but more work is needed to ensure continued progress in mitigating DNS security threats. As we look ahead to 2022, we are committed to collaborating constructively with ICANN and the ICANN community to deliver on these important goals.

The post Ongoing Community Work to Mitigate Domain Name System Security Threats appeared first on Verisign Blog.

Routing Without Rumor: Securing the Internet’s Routing System


This article is based on a paper originally published as part of the Global Commission on the Stability of Cyberspace’s Cyberstability Paper Series, “New Conditions and Constellations in Cyber,” on Dec. 9, 2021.

The Domain Name System has provided the fundamental service of mapping internet names to addresses from almost the earliest days of the internet’s history. Billions of internet-connected devices use DNS continuously to look up Internet Protocol addresses of the named resources they want to connect to — for instance, a website such as blog.verisign.com. Once a device has the resource’s address, it can then communicate with the resource using the internet’s routing system.
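That lookup step can be sketched with Python’s standard library (a minimal illustration; the printed addresses depend on the local resolver):

```python
import socket

def resolve(name: str) -> list[str]:
    """Return the unique IP addresses that DNS reports for a host name."""
    infos = socket.getaddrinfo(name, None)
    # Each entry is (family, type, proto, canonname, sockaddr); the address
    # is the first element of sockaddr for both IPv4 and IPv6.
    return sorted({info[4][0] for info in infos})

# e.g. resolve("blog.verisign.com") returns the site's public addresses;
# once an address is known, the device reaches it via the routing system.
print(resolve("localhost"))
```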

Just as ensuring that DNS is secure, stable and resilient is a priority for Verisign, so is making sure that the routing system has these characteristics. Indeed, DNS itself depends on the internet’s routing system for its communications, so routing security is vital to DNS security too.

To better understand how these challenges can be met, it’s helpful to step back and remember what the internet is: a loosely interconnected network of networks that interact with each other at a multitude of locations, often across regions or countries.

Packets of data are transmitted within and between those networks, which utilize a collection of technical standards and rules called the IP suite. Every device that connects to the internet is uniquely identified by its IP address, which can take the form of either a 32-bit IPv4 address or a 128-bit IPv6 address. Similarly, every network that connects to the internet has an Autonomous System Number, which is used by routing protocols to identify the network within the global routing system.
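Python’s standard `ipaddress` module makes the two address widths concrete (the ASN shown is from the range reserved for documentation, per RFC 5398):

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.1")     # documentation-range IPv4 address
v6 = ipaddress.ip_address("2001:db8::1")   # documentation-range IPv6 address

# The two address families differ in width: 32 vs. 128 bits.
print(v4.max_prefixlen)   # 32
print(v6.max_prefixlen)   # 128

# An Autonomous System Number is an integer identifier for a network;
# 16-bit ASNs were later extended to 32 bits (RFC 6793).
AS_EXAMPLE = 64496        # reserved for documentation (RFC 5398)
print(0 <= AS_EXAMPLE < 2**32)  # True
```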

The primary job of the routing system is to let networks know the available paths through the internet to specific destinations. Today, the system largely relies on a decentralized and implicit trust model — a hallmark of the internet’s design. No centralized authority dictates how or where networks interconnect globally, or which networks are authorized to assert reachability for an internet destination. Instead, networks share knowledge with each other about the available paths from devices to destinations: they route “by rumor.”

The Border Gateway Protocol

Under the Border Gateway Protocol (BGP), the internet’s de facto inter-domain routing protocol, local routing policies decide where and how internet traffic flows: each network independently applies its own policies on what actions it takes, if any, with traffic that passes through its network.

BGP has scaled well over the past three decades because 1) it operates in a distributed manner, 2) it has no central point of control (nor failure), and 3) each network acts autonomously. While networks may base their routing policies on an array of pricing, performance and security characteristics, ultimately BGP can use any available path to reach a destination. Often, the choice of route may depend upon personal decisions by network administrators, as well as informal assessments of technical and even individual reliability.

Route Hijacks and Route Leaks

Two prominent types of operational and security incidents occur in the routing system today: route hijacks and route leaks. Route hijacks reroute internet traffic to an unintended destination, while route leaks propagate routing information to an unintended audience. Both types of incidents can be accidental as well as malicious.

Preventing route hijacks and route leaks requires considerable coordination across the internet community, a concept that fundamentally goes against BGP’s design tenets of distributed action and autonomous operation. A key characteristic of BGP is that any network can potentially announce reachability for any IP addresses to the entire world, which means any network can potentially have a detrimental effect on the global reachability of any internet destination.
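A toy illustration of why that matters: routers select the longest (most specific) matching prefix, so a bogus more-specific announcement from a hypothetical hijacker wins over the legitimate origin (the ASNs and prefixes below are from documentation ranges):

```python
import ipaddress

# Forwarding decisions use longest-prefix match: the most specific announced
# prefix covering a destination wins, regardless of who announced it. A
# hypothetical hijacker announcing a /24 inside a victim's legitimate /16
# therefore attracts that /24's traffic.
table = {
    ipaddress.ip_network("203.0.113.0/24"): "AS64511 (hijacker, more specific)",
    ipaddress.ip_network("203.0.0.0/16"): "AS64496 (legitimate origin)",
}

def best_route(dest: str) -> str:
    addr = ipaddress.ip_address(dest)
    matches = [net for net in table if addr in net]
    return table[max(matches, key=lambda net: net.prefixlen)]

print(best_route("203.0.113.10"))  # the hijacker's more-specific route wins
print(best_route("203.0.1.10"))    # only the legitimate /16 covers this one
```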

Resource Public Key Infrastructure

Fortunately, there is a solution already receiving considerable deployment momentum, the Resource Public Key Infrastructure. RPKI provides an internet number resource certification infrastructure, analogous to the traditional PKI for websites. RPKI enables number resource allocation authorities and networks to specify Route Origin Authorizations that are cryptographically verifiable. ROAs can then be used by relying parties to confirm the routing information shared with them is from the authorized origin.
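A minimal sketch of how a relying party uses ROAs, in the style of RFC 6811 route origin validation (the prefixes and ASNs are documentation-range placeholders, not real allocations):

```python
import ipaddress
from dataclasses import dataclass

@dataclass(frozen=True)
class ROA:
    prefix: ipaddress.IPv4Network
    max_length: int
    origin_asn: int

def origin_validation(announced: str, origin_asn: int, roas: list[ROA]) -> str:
    """Classify an announcement as Valid, Invalid, or NotFound (RFC 6811 style)."""
    net = ipaddress.ip_network(announced)
    covering = [r for r in roas if net.subnet_of(r.prefix)]
    if not covering:
        return "NotFound"   # no ROA covers this prefix
    for roa in covering:
        if roa.origin_asn == origin_asn and net.prefixlen <= roa.max_length:
            return "Valid"
    return "Invalid"        # covered, but origin or length does not match

roas = [ROA(ipaddress.ip_network("203.0.0.0/16"), 20, 64496)]
print(origin_validation("203.0.112.0/20", 64496, roas))  # Valid
print(origin_validation("203.0.113.0/24", 64496, roas))  # Invalid: too specific
print(origin_validation("203.0.0.0/16", 64511, roas))    # Invalid: wrong origin
```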

RPKI is standards-based and appears to be gaining traction in improving BGP security. But it also brings new challenges.

Specifically, RPKI creates new external and third-party dependencies that, as adoption continues, ultimately replace the traditionally autonomous operation of the routing system with a more centralized model. If too tightly coupled to the routing system, these dependencies may impact the robustness and resilience of the internet itself. Also, because RPKI relies on DNS and DNS depends on the routing system, network operators need to be careful not to introduce tightly coupled circular dependencies.

Regional Internet Registries, the organizations responsible for top-level number resource allocation, can potentially have direct operational implications on the routing system. Unlike DNS, the global RPKI as deployed does not have a single root of trust. Instead, it has multiple trust anchors, one operated by each of the RIRs. RPKI therefore brings significant new security, stability and resiliency requirements to RIRs, updating their traditional role of simply allocating ASNs and IP addresses with new operational requirements for ensuring the availability, confidentiality, integrity, and stability of this number resource certification infrastructure.

As part of improving BGP security and encouraging adoption of RPKI, the routing community started the Mutually Agreed Norms for Routing Security initiative in 2014. Supported by the Internet Society, MANRS aims to reduce the most common routing system vulnerabilities by creating a culture of collective responsibility towards the security, stability and resiliency of the global routing system. MANRS is continuing to gain traction, guiding internet operators on what they can do to make the routing system more reliable.

Conclusion

Routing by rumor has served the internet well, and a decade ago it may have been ideal because it avoided systemic dependencies. However, the increasingly critical role of the internet and the evolving cyberthreat landscape require a better approach for protecting routing information and preventing route leaks and route hijacks. As network operators deploy RPKI with security, stability and resiliency, the billions of internet-connected devices that use DNS to look up IP addresses can then communicate with those resources through networks that not only share routing information with one another as they’ve traditionally done, but also do something more. They’ll make sure that the routing information they share and use is secure — and route without rumor.

The post Routing Without Rumor: Securing the Internet’s Routing System appeared first on Verisign Blog.

Ransomware Survey 2022 – like the Curate’s Egg, “good in parts”

You might not like the headline statistics in this year's ransomware report... but that makes it even more important to take a look!

World Password Day – the 1960s just called and gave you your passwords back

Yes, passwords are going away. No, it won't happen tomorrow. So it's still worth knowing the basics of picking proper passwords.

Hackers Increasingly Using Browser Automation Frameworks for Malicious Activities

Cybersecurity researchers are calling attention to a free-to-use browser automation framework that's being increasingly used by threat actors as part of their attack campaigns. "The framework contains numerous features which we assess may be utilized in the enablement of malicious activities," researchers from Team Cymru said in a new report published Wednesday. "The technical entry bar for the…

Black Hat Asia 2022: Building the Network

In part one of this issue of our Black Hat Asia NOC blog, you will find: 

  • From attendee to press to volunteer – coming back to Black Hat as NOC volunteer by Humphrey Cheung 
  • Meraki MR, MS, MX and Systems Manager by Paul Fidler 
  • Meraki Scanning API Receiver by Christian Clasen 

Cisco Meraki was asked by Black Hat Events to be the Official Wired and Wireless Network Equipment Provider for Black Hat Asia 2022, in Singapore, 10-13 May 2022, in addition to providing the Mobile Device Management (since Black Hat USA 2021), Malware Analysis (since Black Hat USA 2016) and DNS (since Black Hat USA 2017) for the Network Operations Center (NOC). We were proud to collaborate with NOC partners Gigamon, IronNet, MyRepublic, NetWitness and Palo Alto Networks.

To accomplish this undertaking in a few weeks’ time, after the conference got a green light under the new COVID protocols, Cisco Meraki and Cisco Secure leadership gave their full support to send the necessary hardware, software licenses and staff to Singapore. Thirteen Cisco engineers, from Singapore, Australia, the United States and the United Kingdom, deployed to the Marina Bay Sands Convention Center, with two additional Cisco engineers supporting remotely from the United States.

From attendee to press to volunteer – coming back to Black Hat as NOC volunteer by Humphrey Cheung

Loops in the networking world are usually considered a bad thing. Spanning tree loops and routing loops happen in an instant and can ruin your whole day, but over the second week of May I made a different kind of loop. Twenty years ago, I first attended the Black Hat and Defcon conventions (yay Caesars Palace and Alexis Park) as a wide-eyed tech newbie who barely knew what WEP hacking, Driftnet image stealing and session hijacking meant. The community was amazing, and the friendships and knowledge I gained springboarded my IT career.

In 2005, I was lucky enough to become a Senior Editor at Tom’s Hardware Guide and attended Black Hat as accredited press from 2005 to 2008. From writing about the latest hardware zero-days to learning how to steal cookies from the master himself, Robert Graham, I can say, without any doubt, Black Hat and Defcon were my favorite events of the year.

Since 2016, I have been a Technical Solutions Architect at Cisco Meraki and have worked on insanely large Meraki installations – some with twenty thousand branches and more than a hundred thousand access points, so setting up the Black Hat network should be a piece of cake right? Heck no, this is unlike any network you’ve experienced!

As an attendee and press, I took the Black Hat network for granted. To take a phrase that we often hear about Cisco Meraki equipment, “it just works”. Back then, while I did see access points and switches around the show, I never really dived into how everything was set up.

A serious challenge was to secure the needed hardware and ship it in time for the conference, given the global supply chain issues. Special recognition to Jeffry Handal for locating the hardware and obtaining the approvals to donate to Black Hat Events. For Black Hat Asia, Cisco Meraki shipped:

Let’s start with availability. iPads and iPhones scanned QR codes to register attendees. Badge printers needed access to the registration system. Training rooms each had their own separate wireless networks; after all, Black Hat attendees get a baptism by fire on network defense and attack. To top it all off, hundreds of attendees gulped down terabytes of data through the main conference wireless network.

All this connectivity was provided by Cisco Meraki access points, switches and security appliances, along with integrations into SecureX, Umbrella and other products. We fielded a small army of engineers to stand up the network in less than two days, just in time for the training sessions on May 10 to 13 and throughout the Black Hat Briefings and Business Hall on May 12 and 13.

Let’s talk security and visibility. For a few days, the Black Hat network is probably one of the most hostile in the world. Attendees learn new exploits, download new tools and are encouraged to test them out. Being able to drill down on attendee connection details and traffic was instrumental in ensuring attendees didn’t get too crazy.

On the wireless front, we made extensive use of our Radio Profiles to reduce interference by tuning power and channel settings. We enabled band steering to get more clients on the 5GHz bands versus 2.4GHz, and watched the Location Heatmap like a hawk, looking for hotspots and dead areas. Handling the barrage of wireless change requests, such as enabling or disabling an SSID, moving VLANs (Virtual Local Area Networks), or switching between tunneling and NAT mode, was a snap with the Meraki Dashboard.

Shutting Down a Network Scanner

While the Cisco Meraki Dashboard is extremely powerful, we happily supported exporting logs for integration into major event collectors, such as the NetWitness SIEM and even the Palo Alto firewall. On Thursday morning, the NOC team found a potentially malicious MacBook Pro performing vulnerability scans against the Black Hat management network. It is a balance: we must allow trainings and demos to connect to malicious websites, and to download and execute malware. However, there is a Code of Conduct that all attendees are expected to follow, posted at Registration with a QR code.

The Cisco Meraki network was exporting syslog and other information to the Palo Alto firewall, and after correlating the data between the Palo Alto Dashboard and Cisco Meraki client details page, we tracked down the laptop to the Business Hall.

We briefed the NOC management, who confirmed the scanning was a violation of the Code of Conduct, and the device was blocked in the Meraki Dashboard, with an instruction to come to the NOC.

The device name and location made it easy to determine which attendee it belonged to.

A delegation from the NOC went to the Business Hall, politely waited for the demo to finish at the booth and had a thoughtful conversation with the person about scanning the network. 😊

Coming back to Black Hat as a NOC volunteer was an amazing experience.  While it made for long days with little sleep, I really can’t think of a better way to give back to the conference that helped jumpstart my professional career.

Meraki MR, MS, MX and Systems Manager by Paul Fidler

With the invitation extended to Cisco Meraki to provide network access, both from a wired and wireless perspective, there was an opportunity to show the value of the Meraki platform integration capabilities of Access Points (AP), switches, security appliances and mobile device management.

The first among these was the use of the Meraki API. We imported the list of MAC addresses of the Meraki MRs to ensure that the APs were named appropriately and tagged, using a single source-of-truth document shared with the NOC management and partners, with the ability to update en masse at any time.
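A sketch of how that bulk naming might be driven from such a document (the endpoint path, header and payload fields follow the Meraki Dashboard API v1 as we recall it, and should be verified against the published API reference; the inventory row and names are invented):

```python
import json
import urllib.request

API_BASE = "https://api.meraki.com/api/v1"

def build_rename_request(serial: str, name: str, tags: list[str], api_key: str):
    """Build (but do not send) a PUT request that names and tags one device.

    The endpoint and payload shape here are assumptions based on the Meraki
    Dashboard API v1 device-update call; check current Meraki docs.
    """
    body = json.dumps({"name": name, "tags": tags}).encode()
    return urllib.request.Request(
        url=f"{API_BASE}/devices/{serial}",
        data=body,
        method="PUT",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

# Hypothetical source-of-truth rows: (serial, AP name, tags)
inventory = [("Q2XX-XXXX-XXXX", "Registration-AP-01", ["registration"])]
reqs = [build_rename_request(s, n, t, "REDACTED") for s, n, t in inventory]
print([r.full_url for r in reqs])
```

Each request would then be sent with `urllib.request.urlopen`, making a re-run of the whole rename a single loop over the document.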

Floor Plan and Location Heatmap

On the first day of NOC setup, the Cisco team walked the venue to discuss AP placements with the staff of the Marina Bay Sands. While we had a simple PowerPoint showing approximate AP placements for the conference, the venue team had an incredibly detailed floor plan of the venue. This was acquired as a PDF and uploaded into the Meraki Dashboard, and with a little fine tuning it aligned perfectly with the Google Map.

Meraki APs were then placed physically in the venue meeting and training rooms, and very roughly on the floor plan. One of the team members then used a printout of the floor plan to mark accurately the placement of the APs. Having the APs named, as mentioned above, made this an easy task (walking around the venue notwithstanding!). This enabled accurate heatmap capability.

The Location Heatmap was a new capability for the Black Hat NOC, and the client data visualized in the NOC continued to be of great interest to the Black Hat management team, such as which trainings, briefings and sponsor booths drew the most interest.

SSID Availability

The ability to use SSID Availability was incredibly useful. It allowed all of the access points to be placed within a single Meraki network, even with separate wireless networks for each of the week’s training events, plus two dedicated SSIDs for the Registration and lead-tracking iOS devices (more on those later): one for initial provisioning (later turned off) and one for certificate-based authentication, giving a very secure connection.

Network Visibility

We were able to monitor the number of connected clients, network usage, passersby detected near the network, and location analytics throughout the conference days. We provided visibility access to the Black Hat NOC management and the technology partners (along with full API access), so they could integrate with the network platform.

Alerts

Meraki alerts are exactly that: the ability to be alerted to something that happens in the Dashboard. The default behavior is an email when something happens. Since emails get lost in the noise, a webhook was created in SecureX orchestration to consume Meraki alerts and send them to Slack (the messaging platform within the Black Hat NOC), using the native template in the Meraki Dashboard. The first alert we created fired if an AP went down for five minutes, the smallest interval available.
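As a sketch of that translation step (the Meraki alert field names below are assumptions about the webhook payload; the Slack side is the standard incoming-webhook JSON with a "text" key):

```python
import json

def meraki_alert_to_slack(alert: dict) -> bytes:
    """Translate a Meraki webhook alert into a Slack incoming-webhook body.

    The Meraki field names (alertType, deviceName, occurredAt) are assumed
    for illustration; the Slack incoming-webhook format is standard.
    """
    text = "Meraki alert: {} on {} at {}".format(
        alert.get("alertType", "unknown"),
        alert.get("deviceName", "unknown device"),
        alert.get("occurredAt", "unknown time"),
    )
    return json.dumps({"text": text}).encode()

# In the NOC this mapping ran inside a SecureX orchestration workflow; the
# resulting body would be POSTed to a Slack incoming-webhook URL.
sample = {"alertType": "AP went down", "deviceName": "Registration-AP-01",
          "occurredAt": "2022-05-12T09:00:00Z"}
print(meraki_alert_to_slack(sample).decode())
```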

The bot was ready; however, the APs stayed up the entire time! 

Meraki Systems Manager

Applying the lessons learned at Black Hat Europe 2021, we set up the Registration iPads and lead-retrieval iPhones with Umbrella, Secure Endpoint and WiFi configurations. As in London, devices were initially configured using Apple Configurator, to both supervise and enroll them into a new Meraki Systems Manager instance in the Dashboard.

However, Black Hat Asia 2022 offered us a unique opportunity to show off some of the more integrated functionality.

System Apps were hidden and various restrictions (disallow joining of unknown networks, disallow tethering to computers, etc.) were applied, as well as a standard WPA2 SSID for the devices that the device vendor had set up (we gave them the name of the SSID and Password).

We also stood up a new SSID and turned on Sentry, which provisions managed devices with not only the SSID information but also a dynamically generated certificate. The certificate authority and RADIUS server needed for this 802.1X authentication are included in the Meraki Dashboard automatically! When a device attempts to authenticate to the network without the certificate, it doesn’t get access. Using SSID Availability, this SSID was broadcast only by the access points in the Registration area.

Using the Sentry allowed us to easily identify devices in the client list.

One of the alerts Meraki generated via syslog, viewable and correlated in the NetWitness SIEM, was a ‘de-auth’ event from an access point. While we had the IP address of the device, making it easy to find, the fact that the event was an 802.1X de-authentication narrowed the devices down to just the iPads and iPhones used for registration (as all other access points were using WPA2). This was further confirmed by the certificate name seen in the de-auth:

Along with the certificate name was the name of the AP: R**

Device Location

One of the inherent problems with iOS device location is that when devices are used indoors, GPS signals just aren’t strong enough to penetrate modern buildings. However, because the accurate locations of the Meraki access points were marked on the floor plan in the Dashboard, and because the Meraki Systems Manager iOS devices were in the same Dashboard organization as the access points, we got a much more accurate map of devices compared to Black Hat Europe 2021 in London.

When the conference Registration closed on the last day and the Business Hall sponsors all returned their iPhones, we were able to remotely wipe all of the devices, removing all attendee data, prior to returning them to the device contractor.

Meraki Scanning API Receiver by Christian Clasen

Leveraging the ubiquity of both WiFi and Bluetooth radios in mobile devices and laptops, Cisco Meraki’s wireless access points can detect and provide location analytics to report on user foot traffic behavior. This can be useful in retail scenarios where customers desire location and movement data to better understand the trends of engagement in their physical stores.

Meraki can aggregate real-time data on detected WiFi and Bluetooth devices and triangulate their locations rather precisely when the floor plan and AP placement have been diligently designed and documented. At the Black Hat Asia conference, we mapped the AP locations carefully to ensure the highest accuracy possible.

This scanning data is available for clients whether they are associated with the access points or not. At the conference, we were able to get very detailed heatmaps and time-lapse animations representing the movement of attendees throughout the day. This data is valuable to conference organizers in determining the popularity of certain talks, and the attendance at things like keynote presentations and foot traffic at booths.

This was great for monitoring during the event, but the Dashboard only retains 24 hours of scanning data, limiting what we could do when it came to long-term analysis. Fortunately for us, Meraki offers an API service we could use to capture this treasure trove offline for further analysis. We only needed to build a receiver for it.

The Receiver Stack

The Scanning API requires that the customer stand up infrastructure to store the data, and then register with the Meraki cloud using a verification code and secret. It is composed of two endpoints:

  1. Validator

Returns the validator string in the response body

[GET] https://yourserver/

This endpoint is called by Meraki to validate the receiving server. Meraki expects the response to contain the validator string defined in the Meraki Dashboard for the respective network.

  2. Receiver

Accepts an observation payload from the Meraki cloud

[POST] https://yourserver/

This endpoint is responsible for receiving the observation data provided by Meraki. The URL path should match that of the [GET] request, used for validation.

The request body will consist of an array of JSON objects containing the observations, aggregated per network. The JSON schema depends on whether WiFi or BLE devices were observed, as indicated in the type parameter.

What we needed was a simple technology stack that would contain (at minimum) a publicly accessible web server capable of TLS. In the end, the simplest implementation was a web server written using Python Flask, in a Docker container, deployed in AWS, connected through ngrok.

In fewer than 50 lines of Python, we could accept the inbound connection from Meraki and reply with the chosen verification code. We would then listen for the incoming POST data and dump it into a local data store for future analysis. Since this was to be a temporary solution (the duration of the four-day conference), the thought of registering a public domain and configuring TLS certificates wasn’t particularly appealing. An excellent solution for these types of API integrations is ngrok (https://ngrok.com/). And a handy Python wrapper was available for simple integration into the script (https://pyngrok.readthedocs.io/en/latest/index.html).
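The core logic of those two endpoints can be sketched as plain functions (the validator, secret and payload shape below are placeholders for the values configured in the Dashboard; the real receiver wrapped logic like this in Flask routes):

```python
import json
from pathlib import Path

VALIDATOR = "example-validator-from-dashboard"   # placeholder value
SECRET = "example-shared-secret"                  # placeholder value
STORE = Path("observations.jsonl")                # local data store

def handle_get() -> str:
    """[GET] /  - Meraki calls this once to validate the receiving server."""
    return VALIDATOR

def handle_post(raw_body: bytes) -> bool:
    """[POST] / - verify the shared secret, then persist the observations."""
    payload = json.loads(raw_body)
    if payload.get("secret") != SECRET:
        return False                     # reject payloads we didn't register
    with STORE.open("a") as fh:          # append as one JSON object per line
        fh.write(json.dumps(payload) + "\n")
    return True

# An observation payload, shape abbreviated for illustration only:
sample = {"secret": SECRET, "type": "WiFi", "data": {"observations": []}}
print(handle_get())
print(handle_post(json.dumps(sample).encode()))  # True
```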

We wanted to easily re-use this stack next time around, so it only made sense to containerize it in Docker. This way, the whole thing could be stood up at the next conference, with one simple command. The image we ended up with would mount a local volume, so that the ingested data would remain persistent across container restarts.

Ngrok allowed us to create a secure tunnel from the container to a publicly resolvable domain in the cloud, with a trusted TLS certificate generated for us. Adding that URL to the Meraki Dashboard was all we needed to do to start ingesting the massive treasure trove of location data from the APs: nearly 1GB of JSON every 24 hours.

This “quick and dirty” solution illustrated the importance of interoperability and openness in the technology space when enabling security operations to gather and analyze the data they require to monitor and secure events like Black Hat, and their enterprise networks as well. It served us well during the conference and will certainly be used again going forward.

Check out part two of the blog, Black Hat Asia 2022 Continued: Cisco Secure Integrations, where we will discuss integrating NOC operations and making your Cisco Secure deployment more effective:

  • SecureX: Bringing Threat Intelligence Together by Ian Redden
  • Device type spoofing event by Jonny Noble
  • Self Service with SecureX Orchestration and Slack by Matt Vander Horst
  • Using SecureX sign-on to streamline access to the Cisco Stack at Black Hat by Adi Sankar
  • Future Threat Vectors to Consider – Cloud App Discovery by Alejo Calaoagan
  • Malware Threat Intelligence made easy and available, with Cisco Secure Malware Analytics and SecureX by Ben Greenbaum

Acknowledgements: Special thanks to the Cisco Meraki and Cisco Secure Black Hat NOC team: Aditya Sankar, Aldous Yeung, Alejo Calaoagan, Ben Greenbaum, Christian Clasen, Felix H Y Lam, George Dorsey, Humphrey Cheung, Ian Redden, Jeffrey Chua, Jeffry Handal, Jonny Noble, Matt Vander Horst, Paul Fidler and Steven Fan.

Also, to our NOC partners NetWitness (especially David Glover), Palo Alto Networks (especially James Holland), Gigamon, IronNet (especially Bill Swearington), and the entire Black Hat / Informa Tech staff (especially Grifter ‘Neil Wyler’, Bart Stump, James Pope, Steve Fink and Steve Oldenbourg).

About Black Hat

For more than 20 years, Black Hat has provided attendees with the very latest in information security research, development, and trends. These high-profile global events and trainings are driven by the needs of the security community, striving to bring together the best minds in the industry. Black Hat inspires professionals at all career levels, encouraging growth and collaboration among academia, world-class researchers, and leaders in the public and private sectors. Black Hat Briefings and Trainings are held annually in the United States, Europe and Asia. More information is available at: blackhat.com. Black Hat is brought to you by Informa Tech.


We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!

Cisco Secure Social Channels

Instagram
Facebook
Twitter
LinkedIn

Black Hat Asia 2022 Continued: Cisco Secure Integrations

In part one of our Black Hat Asia 2022 NOC blog, we discussed building the network with Meraki: 

  • From attendee to press to volunteer – coming back to Black Hat as NOC volunteer by Humphrey Cheung 
  • Meraki MR, MS, MX and Systems Manager by Paul Fidler 
  • Meraki Scanning API Receiver by Christian Clasen 

In this part two, we will discuss:  

  • SecureX: Bringing Threat Intelligence Together by Ian Redden 
  • Device type spoofing event by Jonny Noble 
  • Self Service with SecureX Orchestration and Slack by Matt Vander Horst 
  • Using SecureX sign-on to streamline access to the Cisco Stack at Black Hat by Adi Sankar 
  • Future Threat Vectors to Consider – Cloud App Discovery by Alejo Calaoagan 
  • Malware Threat Intelligence made easy and available, with Cisco Secure Malware Analytics and SecureX by Ben Greenbaum 

SecureX: Bringing Threat Intelligence Together by Ian Redden 

In addition to the Meraki networking gear, Cisco Secure also shipped two Umbrella DNS virtual appliances to Black Hat Asia, for internal network visibility with redundancy, in addition to providing: 

Cisco Secure Threat Intelligence (correlated through SecureX)

Donated Partner Threat Intelligence (correlated through SecureX)

Open-Source Threat Intelligence (correlated through SecureX)

Continued Integrations from past Black Hat events

  • NetWitness PCAP file carving and submission to Cisco Secure Malware Analytics (formerly Threat Grid) for analysis

New Integrations Created at Black Hat Asia 2022

  • SecureX threat response and NetWitness SIEM: Sightings in investigations
  • SecureX orchestration workflows for Slack that enabled:
    • Administrators to block a device by MAC address for violating the conference Code of Conduct
    • NOC members to query Meraki for information about network devices and their clients
    • NOC members to update the VLAN on a Meraki switchport
    • NOC members to query Palo Alto Panorama for client information
    • Notification if an AP went down
  • NetWitness SIEM integration with Meraki syslogs
  • Palo Alto Panorama integration with Meraki syslogs
  • Palo Alto Cortex XSOAR integration with Meraki and Umbrella

Device type spoofing event by Jonny Noble

Overview

During the conference, a NOC Partner informed us that they received an alert from May 10 concerning an endpoint client that accessed two domains that they saw as malicious:

  • legendarytable[.]com
  • drakefollow[.]com

Client details from Partner:

  • Private IP: 10.XXX.XXX.XXX
  • Client name: LAPTOP-8MLGDXXXX
  • MAC: f4:XX:XX:XX:XX:XX
  • User agent for detected incidents: Mozilla/5.0 (iPhone; CPU iPhone OS 11_1_2 like Mac OS X) AppleWebKit/602.2.8 (KHTML, like Gecko) Version/11.0 Mobile/14B55c Safari/602.1

Based on the user agent, the partner derived that the device type was an Apple iPhone.

SecureX analysis

  • legendarytable[.]com → Judgement of Suspicious by alphaMountain.ai
  • drakefollow[.]com → Judgement of Malicious by alphaMountain.ai

Umbrella Investigate analysis

Umbrella Investigate positions both domains as low risk; both were registered recently in Poland, and both are hosted on the same IP:

Despite the low-risk score, the nameservers have high counts of malicious associated domains:

Targeting users in the USA, UK, and Nigeria:

Meraki analysis

Based on the time of the incident, we could trace the device’s location (based on its IP address), thanks to the effort we invested in mapping out the exact location of all the Meraki APs deployed across the convention center, with an overlay of the event map covering the area:

  • Access Point: APXX
  • Room: Orchid Ballroom XXX
  • Training course at time in location: “Web Hacking Black Belt Edition”
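The location lookup above can be sketched as a simple join between an AP-to-room mapping and a table of client associations over time. This is only an illustration of the idea; all names and values below are hypothetical, and the real data came from the Meraki dashboard and our AP deployment map.

```python
# Hypothetical AP-to-room map built during deployment, and client
# association records (who was on which AP, and when).
ap_rooms = {"AP01": "Orchid Ballroom 1"}
associations = [
    {"ip": "10.0.5.23", "ap": "AP01", "start": 1652170000, "end": 1652173600},
]

def locate_client(ip: str, at_time: int):
    """Return the room a client IP was in at a given time, if known."""
    for assoc in associations:
        if assoc["ip"] == ip and assoc["start"] <= at_time <= assoc["end"]:
            return ap_rooms.get(assoc["ap"])
    return None

print(locate_client("10.0.5.23", 1652171000))  # → Orchid Ballroom 1
```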

Further analysis and conclusions

The device name (LAPTOP-8MLGXXXXXX) and MAC address seen (f4:XX:XX:XX:XX:XX) both matched across the partner and Meraki, so there was no question that we were analyzing the same device.

Based on the user agent captured by the partner, the device type was an Apple iPhone. However, Meraki reported the device and its OS as “Intel, Android”.

A quick lookup of the MAC address confirmed that the OUI (organizationally unique identifier) f42679 belongs to Intel Malaysia, making it unlikely that this was an Apple iPhone.
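The OUI check amounts to normalizing the MAC address and looking up its first three octets. The single table entry below reflects this incident (f4:26:79 resolving to Intel Malaysia); a real lookup would use the full IEEE OUI registry.

```python
# Tiny excerpt of an OUI table; real lookups use the IEEE registry.
OUI_VENDORS = {"f4:26:79": "Intel Malaysia"}

def vendor_for_mac(mac: str) -> str:
    """Return the vendor for a MAC address based on its OUI prefix."""
    octets = mac.lower().replace("-", ":").split(":")
    return OUI_VENDORS.get(":".join(octets[:3]), "Unknown vendor")

print(vendor_for_mac("F4-26-79-12-34-56"))  # → Intel Malaysia
```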

The description for the training “Web Hacking Black Belt Edition” can be seen here:

https://www.blackhat.com/asia-22/training/schedule/#web-hacking-black-belt-edition–day-25388

It is highly likely that the training content included tools and techniques for spoofing the user agent or device type.

There is also a high probability that the two domains observed were used as part of the training activity, rather than this being part of a live attack.

Integrating the various Cisco technologies used in this investigation (Meraki wireless infrastructure, SecureX, Umbrella, Investigate), together with the close collaboration of our NOC partners, put us where we needed to be and gave us the tools to swiftly collect the data, connect the dots, draw conclusions, and bring the incident to closure.

Self Service with SecureX Orchestration and Slack by Matt Vander Horst

Overview

Since Meraki was a new platform for much of the NOC’s staff, we wanted to make information easier to gather and enable a certain amount of self-service. Since the Black Hat NOC uses Slack for messaging, we decided to create a Slack bot that NOC staff could use to interact with the Meraki infrastructure as well as Palo Alto Panorama using the SecureX Orchestration remote appliance. When users communicate with the bot, webhooks are sent to Cisco SecureX Orchestration to do the work on the back end and send the results back to the user.

Design

Here’s how this integration works:

  1. When a Slack user triggers a “/” slash command or other type of interaction, a webhook is sent to SecureX Orchestration. Webhooks trigger orchestration workflows, which can do any number of things. In this case, we have two different workflows: one to handle slash commands and another for interactive elements such as forms (more on the workflows later).
  2. Once the workflow is triggered, it makes the necessary API calls to Meraki or Palo Alto Panorama depending on the command issued.
  3. After the workflow is finished, the results are passed back to Slack using either an API request (for slash commands) or webhook (for interactive elements).
  4. The user is presented with the results of their inquiry or the action they requested.
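The dispatch step in this flow can be sketched in a few lines. The command names below match those used at the conference, but the handler bodies and payload wording are invented; the real logic runs server-side in SecureX Orchestration workflows.

```python
import json

def handle_slash_command(command: str, text: str) -> dict:
    """Map a Slack slash command to a back-end action and build the
    message that would be posted back to the user."""
    if command == "/pan_traffic_history":
        body = f"Fetching Panorama traffic logs for '{text}'..."
    elif command == "/update_vlan":
        body = "Opening the VLAN update form..."
    else:
        body = f"Unknown command: {command}"
    # An "ephemeral" response is shown only to the requesting user.
    return {"response_type": "ephemeral", "text": body}

print(json.dumps(handle_slash_command("/pan_traffic_history", "10.0.0.5")))
```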

Workflow #1: Handle Slash Commands

Slash commands are a special type of message built into Slack that allow users to interact with a bot. When a Slack user executes a slash command, the command and its arguments are sent to SecureX Orchestration where a workflow handles the command. The table below shows a summary of the slash commands our bot supported for Black Hat Asia 2022:

Here’s a portion of the SecureX Orchestration workflow that powers the above commands:

And here’s a sample of firewall logs as returned from the “/pan_traffic_history” command:

Workflow #2: Handle Interactivity

A more advanced form of user interaction comes in the form of Slack blocks. Instead of including a command’s arguments in the command itself, you can execute the command and Slack will present you with a form to complete, like this one for the “/update_vlan” command:

These forms are much more user friendly and allow information to be pre-populated for the user. In the example above, the user can simply select the switch to configure from a drop-down list instead of having to enter its name or serial number. When the user submits one of these forms, a webhook is sent to SecureX Orchestration to execute a workflow. The workflow takes the requested action and sends back a confirmation to the user:
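A minimal Slack Block Kit form of the kind described for “/update_vlan” might look like the structure below: a pre-populated drop-down of switches plus a VLAN input. The switch names, serials, and action IDs are hypothetical; the block layout follows Slack's Block Kit schema.

```python
# Hypothetical modal form: select a switch from a drop-down, enter a VLAN.
update_vlan_form = {
    "type": "modal",
    "title": {"type": "plain_text", "text": "Update VLAN"},
    "blocks": [
        {
            "type": "input",
            "label": {"type": "plain_text", "text": "Switch"},
            "element": {
                "type": "static_select",
                "action_id": "switch_select",
                "options": [
                    {"text": {"type": "plain_text", "text": "Core-SW-1"},
                     "value": "serial-0001"},
                    {"text": {"type": "plain_text", "text": "Core-SW-2"},
                     "value": "serial-0002"},
                ],
            },
        },
        {
            "type": "input",
            "label": {"type": "plain_text", "text": "New VLAN ID"},
            "element": {"type": "plain_text_input", "action_id": "vlan_id"},
        },
    ],
}
print(update_vlan_form["blocks"][0]["element"]["type"])  # → static_select
```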

Conclusion

While these two workflows only scratched the surface of what can be done with SecureX Orchestration webhooks and Slack, we now have a foundation that can be easily expanded upon going forward. We can add additional commands, new forms of interactivity, and continue to enable NOC staff to get the information they need and take necessary action. The goal of orchestration is to make life simpler, whether it is by automating our interactions with technology or making those interactions easier for the user. 

Future Threat Vectors to Consider – Cloud App Discovery by Alejo Calaoagan

Since 2017 (starting at Black Hat USA in Las Vegas), Cisco Umbrella has provided DNS security to the Black Hat attendee network, adding layers of traffic visibility previously not available. Our efforts have largely been successful, identifying thousands of threats over the years and mitigating them via Umbrella’s blocking capabilities when necessary. We took this a step further at Black Hat London 2021, where we introduced our Virtual Appliances to provide source IP attribution for the devices making requests.

Here at Black Hat Asia 2022, we’ve been noodling on additional ways to provide advanced protection for future shows, and it starts with Umbrella’s Cloud Application Discovery feature, which identified 2,286 unique applications accessed by users on the attendee network across the four-day conference. Looking at a snapshot from a single day of the show, Umbrella captured 572,282 DNS requests from all cloud apps, with over 42,000 posing either high or very high risk.

Digging deeper into the data, we see not only the types of apps being accessed…

…but also see the apps themselves…

…and we can flag apps that look suspicious.

We also include risk breakdowns by category…

…and drill downs on each.

While this data alone won’t provide enough information to take action, folding it into our analysis, as we have been doing, may provide a window into threat vectors that previously went unseen. For example, if we identify a compromised device infected with malware, or a device attempting to access restricted resources on the network, we can dig deeper into the types of cloud apps those devices are using and correlate that data with suspicious request activity, potentially uncovering tools we should block in the future.
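The correlation described above can be sketched as a join between per-app risk ratings from Cloud App Discovery and per-device app usage, surfacing high-risk apps used by devices already flagged as suspicious. All app names, IPs, and ratings below are invented for illustration.

```python
# Hypothetical inputs: app risk ratings, app usage per device, and the set
# of devices already flagged by other telemetry.
app_risk = {"file-share-x": "very high", "chat-y": "low", "tunnel-z": "high"}
device_apps = {
    "10.0.1.23": ["chat-y", "file-share-x"],
    "10.0.1.99": ["tunnel-z"],
}
flagged_devices = {"10.0.1.23"}  # e.g., devices with suspicious requests

def risky_apps_for_flagged(threshold=("high", "very high")):
    """Return high-risk apps used by each flagged device."""
    hits = {}
    for device in flagged_devices:
        risky = [app for app in device_apps.get(device, [])
                 if app_risk.get(app) in threshold]
        if risky:
            hits[device] = risky
    return hits

print(risky_apps_for_flagged())  # → {'10.0.1.23': ['file-share-x']}
```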

I can’t say for certain how much this extra data set will help us uncover new threats, but, with Black Hat USA just around the corner, we’ll find out soon.

Using SecureX sign-on to streamline access to the Cisco Stack at Black Hat by Adi Sankar

Over the past five years, Cisco has tremendously expanded our presence at Black Hat to include a multitude of products. Of course, sign-on was simple when it was just one product (Secure Malware Analytics) and one user to log in. When it came time to add a new technology to the stack, it was added separately as a standalone product with its own method of logging in. As the number of products increased, so did the number of Cisco staff at the conference to support them. Sharing usernames and passwords became tedious, not to mention insecure, especially with 15 Cisco staff, plus partners, accessing the platforms.

The Cisco Secure stack at Black Hat includes SecureX, Umbrella, Malware Analytics, Secure Endpoint (iOS Clarity), and Meraki. All of these technologies natively support SAML SSO with SecureX sign-on. This means each of our Cisco staff members can have an individual SecureX sign-on account to log into the various consoles, resulting in better role-based access control, better audit logging, and an overall better login experience. With SecureX sign-on, we can log into all the products by typing a password only once and approving one Cisco DUO Multi-Factor Authentication (MFA) push.

How does this magic work behind the scenes? It’s actually rather simple to configure SSO for each of the Cisco technologies, since they all support SecureX sign-on natively. First and foremost, you must set up a new SecureX org by creating a SecureX sign-on account, creating a new organization, and integrating at least one Cisco technology. In this case, I created a new SecureX organization for Black Hat and added the Secure Endpoint, Umbrella, Meraki Systems Manager, and Secure Malware Analytics modules. Then, from Administration → Users in SecureX, I sent an invite to the Cisco staffers attending the conference, which contained a link to create their account and join the Black Hat SecureX organization. Next, let’s take a look at the individual product configurations.

Meraki:

In the Meraki organization settings, enable SecureX sign-on. Then, under Organization → Administrators, add a new user and specify SecureX sign-on as the authentication method. Meraki even lets you limit users to particular networks and set permission levels for those networks. Accepting the email invitation is easy, since the user should already be logged into their SecureX sign-on account. Now, logging into Meraki requires only an email address, with no password or additional DUO push.

Umbrella:

Under Admin → Authentication, configure SecureX sign-on; this requires a test login to ensure you can still log in before SSO is enforced for authentication to Umbrella. There is no need to configure MFA in Umbrella, since SecureX sign-on comes with built-in DUO MFA. Existing users, and any new users added in Umbrella under Admin → Accounts, will now use SecureX sign-on to log in to Umbrella. Logging into Umbrella is now a seamless launch from the SecureX dashboard or from the SecureX ribbon in any of the other consoles.

Secure Malware Analytics:

A Secure Malware Analytics organization admin can create new users in their Threat Grid tenant. This username is unique to Malware Analytics, but it can be connected to a SecureX sign-on account to take advantage of the seamless login flow. From the email invitation, the user creates a password for their Malware Analytics user and accepts the EULA. Then, in the top right under My Malware Analytics Account, the user has the option to connect their SecureX sign-on account, a one-click process if already signed in with SecureX sign-on. Now, when a user navigates to the Malware Analytics login page, simply clicking “Login with SecureX Sign-On” grants them access to the console.

Secure Endpoint:

The Secure Endpoint deployment at Black Hat is limited to iOS Clarity through Meraki Systems Manager for the conference iOS devices. Most of the asset information we need about the iPhones/iPads is brought in through the SecureX Device Insights inventory. However, for initial configuration and to view device trajectory, it is required to log into Secure Endpoint. A new Secure Endpoint account can be created under Accounts → Users, and an invite is sent to the corresponding email address. Accepting the invite is a smooth process, since the user is already signed in with SecureX sign-on. Privileges for the user in the Endpoint console can be granted from within the user account.

Conclusion:

To sum it all up, SecureX sign-on is the standard for the Cisco stack moving forward. With a new SecureX organization instantiated using SecureX sign-on, any new users of the Cisco stack at Black Hat will use SecureX sign-on. It has made our user management much more secure as we have expanded our presence at Black Hat, providing a unified login mechanism for all the products and modernizing our login experience at the conference.

Malware Threat Intelligence made easy and available, with Cisco Secure Malware Analytics and SecureX by Ben Greenbaum

I’d gotten used to people’s reactions upon seeing SecureX in use for the first time. A few times at Black Hat, a small audience gathered just to watch us effortlessly correlate data from multiple threat intelligence repositories and several security sensor networks in just a few clicks in a single interface, giving us rapid sequencing of events and an intuitive understanding of security events, situations, causes, and consequences. You’ve already read about a few of these instances above. Here is just one example of SecureX automatically putting together a chronological history of observed network events detected by products from two vendors (Cisco Umbrella and NetWitness). The participation of NetWitness in this and all of our other investigations was made possible by our open architecture, available APIs and API specifications, and the creation of the NetWitness module described above.

In addition to the traffic and online activities of hundreds of user devices on the network, we were responsible for monitoring a handful of Black Hat-owned devices as well. SecureX Device Insights made it easy to access information about these assets, either en masse or as required during an ongoing investigation. iOS Clarity for Secure Endpoint and Meraki Systems Manager both contributed to this useful tool, which adds business intelligence and asset context to SecureX’s native event and threat intelligence, for more complete and more actionable security intelligence overall.

SecureX is made possible by dozens of integrations, each bringing their own unique information and capabilities. This time though, for me, the star of the SecureX show was our malware analysis engine, Cisco Secure Malware Analytics (CSMA). Shortly before Black Hat Asia, the CSMA team released a new version of their SecureX module. SecureX can now query CSMA’s database of malware behavior and activity, including all relevant indicators and observables, as an automated part of the regular process of any investigation performed in SecureX Threat Response.

This capability is most useful in two scenarios:

  1. Determining whether suspicious domains, IPs, and files reported by any other technology had been observed in the analysis of any of the millions of publicly submitted file samples, or our own.
  2. Rapidly gathering additional context about files submitted to the analysis engine by the integrated products in the Black Hat NOC.

The first was a significant time saver in several investigations. In the example below, we received an alert about connections to a suspicious domain. In that scenario, our first course of action is to investigate the domain and any other observables reported with it (typically the internal and public IPs included in the alert). Thanks to the new CSMA module, we immediately discovered that the domain had a history of being contacted by a variety of malware samples from multiple families. That information, corroborated by automatically gathered reputation information from multiple sources (including SecureX partner Recorded Future, who donated a full threat intelligence license to the Black Hat NOC), gave us an immediate next direction to investigate as we hunted for evidence of those files being present in network traffic, or of any traffic to other C&C resources known to be used by those families. Going from the first alert to a robust, data-driven set of related signals to look for took only minutes.

The other scenario, investigating files submitted for analysis, came up less frequently, but when it did, the CSMA/SecureX integration was equally impressive. We could rapidly, nearly immediately, look for evidence of any of our analyzed samples in the environment across all other deployed SecureX-compatible technologies. That evidence was no longer limited to the hash itself; it included any of the network resources or dropped payloads associated with the sample as well, easily identifying local targets who had perhaps not seen the exact variant submitted, but who had nonetheless been in contact with that sample’s Command and Control infrastructure or other related artifacts.
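The pivot described above amounts to expanding the hunt from a sample's hash to every indicator observed in its analysis, then matching those against what other tools saw in the environment. All hashes, domains, and field names below are invented placeholders for illustration.

```python
# Hypothetical analysis report for one submitted sample.
sample_report = {
    "sha256": "d41d8c...sample",
    "contacted_domains": ["c2.bad-example.test"],
    "dropped_files": ["9e107d...payload"],
}

def indicators_to_hunt(report: dict) -> set:
    """Expand a sample into its full set of huntable indicators."""
    iocs = {report["sha256"]}
    iocs.update(report["contacted_domains"])
    iocs.update(report["dropped_files"])
    return iocs

# Indicators observed elsewhere in the environment by integrated tools.
observed_in_environment = {"c2.bad-example.test", "unrelated.example.test"}
print(indicators_to_hunt(sample_report) & observed_in_environment)
```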

And of course, thanks to the presence of the ribbon in the CSMA UI, we could be even more efficient and do this with multiple samples at once.

SecureX greatly increased the efficiency of our small volunteer team, and certainly made it possible for us to investigate more alerts and events, and hunt for more threats, all more thoroughly, than we would have been able to without it. SecureX truly took this team to the next level, by augmenting and operationalizing the tools and the staff that we had at our disposal.

We look forward to seeing you at Black Hat USA in Las Vegas, 6-11 August 2022!

Acknowledgements: Special thanks to the Cisco Meraki and Cisco Secure Black Hat NOC team: Aditya Sankar, Aldous Yeung, Alejo Calaoagan, Ben Greenbaum, Christian Clasen, Felix H Y Lam, George Dorsey, Humphrey Cheung, Ian Redden, Jeffrey Chua, Jeffry Handal, Jonny Noble, Matt Vander Horst, Paul Fidler and Steven Fan.

Also, to our NOC partners NetWitness (especially David Glover), Palo Alto Networks (especially James Holland), Gigamon, IronNet (especially Bill Swearington), and the entire Black Hat / Informa Tech staff (especially Grifter ‘Neil Wyler’, Bart Stump, James Pope, Steve Fink and Steve Oldenbourg).

About Black Hat

For more than 20 years, Black Hat has provided attendees with the very latest in information security research, development, and trends. These high-profile global events and trainings are driven by the needs of the security community, striving to bring together the best minds in the industry. Black Hat inspires professionals at all career levels, encouraging growth and collaboration among academia, world-class researchers, and leaders in the public and private sectors. Black Hat Briefings and Trainings are held annually in the United States, Europe and Asia. More information is available at: blackhat.com. Black Hat is brought to you by Informa Tech.

Labtainers - A Docker-based Cyber Lab Framework


Labtainers include more than 50 cyber lab exercises and tools to build your own. Import a single VM appliance or install on a Linux system, and your students are done with provisioning and administrative setup for these and future lab exercises.

  • Consistent lab execution environments and automated provisioning via Docker containers
  • Multi-component network topologies on a modestly performing laptop computer
  • Automated assessment of student lab activity and progress
  • Individualized lab exercises to discourage sharing solutions

Labtainers provide controlled and consistent execution environments in which students perform labs entirely within the confines of their computer, regardless of the Linux distribution and packages installed on the student's computer. Labtainers run on our VM appliance, or on any Linux system with Docker installed. Labtainers are also available as cloud-based VMs, e.g., on Azure, as described in the Student Guide.

See the Student Guide for installation and use, and the Instructor Guide for student assessment. Developing and customizing lab exercises is described in the Designer Guide. See the Papers for additional information about the framework. The Labtainers website, and downloads (including VM appliances with Labtainers pre-installed) are at https://nps.edu/web/c3o/labtainers.

Distribution created: 03/25/2022 09:37
Revision: v1.3.7c
Commit: 626ea075
Branch: master


Distribution and Use

Please see the licensing and distribution information in the docs/license.md file.

Guide to directories

  • scripts/labtainers-student -- the work directory for running and testing student labs. You must be in that directory to run student labs.

  • scripts/labtainers-instructor -- the work directory for running and testing automated assessment and viewing student results.

  • labs -- Files specific to each of the labs

  • setup_scripts -- scripts for installing Labtainers and Docker and updating Labtainers

  • docs -- latex source for the labdesigner.pdf, and other documentation.

  • UI -- Labtainers lab editor source code (Java).

  • headless-lite -- scripts for managing Docker Workstation and cloud instances of Labtainers (systems that do not have native X11 servers).

  • scripts/designer -- Tools for building new labs and managing base Docker images.

  • config -- system-wide configuration settings (these are not the lab-specific configuration settings).

  • distrib -- distribution support scripts, e.g., for publishing labs to the Docker hub.

  • testsets -- Test procedures and expected results. (Per-lab drivers for SimLab are not distributed).

  • pkg-mirrors -- utility scripts for internal NPS package mirroring to reduce external package pulling during tests and distribution.

Support

Use the GitHub issue reports, or email me at mfthomps@nps.edu

Also see https://my.nps.edu/web/c3o/support1

Release notes

The standard Labtainers distribution does not include files required for development of new labs. For those, run ./update-designer.sh from the labtainer/trunk/setup_scripts directory.

The installation script and the update-designer.sh script set environment variables, so you may want to logout/login, or start a new bash shell before using Labtainers the first time.

March 23, 2022

  • Fix path to tap lock directory; was causing failure of labs using network taps
  • Update plc-traffic netmon computer to have openjfx needed for new grassmarlin in java environment
  • Speed up lab startup by avoiding chown -R, which is very slow in docker.
  • Another shot at avoiding deletion of the X11 link in container /tmp directory.
  • Fix webtrack counting of sites visited and remove live-headers goal; that tool is no longer available. Clarified some lab manual steps.

March 2, 2022

  • Add new ssh-tunnel lab (thanks GWD!)
  • Fix labedit failure to reflect X11 value set by new_lab_setup
  • Add option to not parameterize a container

February 23, 2022

  • labedit was corrupting start.config after addition of new containers
  • Incorrect path to student guide in the student README file; dynamically change for cloud configs
  • Incorrect extension to update-labtainer.sh
  • Misc guide enhancements
  • Update the ghidra lab to include version 10.1.2 of Ghidra

February 15, 2022

  • Revert Azure cloud support to provision for each student. Azure discourages sharing resources.

January 24, 2022

  • Azure cloud now uses image stored in an Azure blob instead of provisioning for each student.
  • Added support for Google Cloud.

January 19, 2022

  • Introduce Labtainers on the Azure cloud. See the Student Guide for details on how to use this.

January 3, 2022

  • Revise setuid-env lab for better assessment and SimLab testing, and to avoid SIGHUP in the printenv child.
  • Fix assessment goal count directive to exclude result tag values of false.
  • Do not require labname when using gradelab -a with a grader started with the debug option.
  • Revise capinout (stdin/stdout mirroring) to handle orphaning of command process children, improved documentation and error handling.
  • Added display of progress bars of docker images being pulled when a lab is first run.
  • User feedback on progress of container initialization.
  • The pcap-lib lab was missing a notify file needed for automated assessment; Remove extraneous step from Lab Manual.

November 23, 2021

  • Disable ubuntu popup errors on test VM.
  • Fix handling of different DISPLAY variable formats.

October 22, 2021

  • Revise the tcpip lab guide to note a successful syn-flood attack is not possible. Fix its automated assessment and add SimLab scripts.
  • Change artifact file extension from zip to lab, and add a preamble to confuse GUI file managers. Students were opening the zip and submitting its guts.
  • Make the -r option to gradelab the default, add a -c option for cumulative use of grader.
  • Modify refresh_mirror to refer to the local release date to avoid frequent queries of DockerHub. Each such query counts as an image pull, and they are now trying to monetize those.

September 30, 2021

  • Change bufoverflow lab guide and grading to not expect success with ASLR turned on, assess whether it was run.
  • Error handling for web grader for cases where student lacks results.
  • Print warning when deprecated lab is run.
  • Change formatstring grading to remove unused "_leaked_secret" description and clarify value of leaked_no_scanf.
  • Also change formatstring grading to allow any name for the vulnerable executable.

September 29, 2021

  • Gradelab error handling, reduce instances of crashes due to bad zip files.
  • Limit stdout artifact files to 1MB

September 17, 2021

  • Ghidra lab guide had wrong IP address, was not remade from source.

September 14, 2021

  • Example labs for LDAP and Mariadb using SSL. Intended as templates for new labs.
  • Handle Mariadb log format
  • Add per-container parameters to limit CPU use or pin container to CPU set.
  • Labpack creation now available via a GUI (makepackui).
  • Tab completion for the labtainer, labpack and gradelab commands.
  • New parallel computing lab "parallel" using MPI.

August 3, 2021

  • Add a "WAIT_FOR" configuration option to cause a container to delay parameterization until another container completes its parameterization.
  • Support for Mariadb log formats in results parsing
  • Remove support for Mac and Windows use of Docker Desktop. That product is too unstable for us to support.
  • Suppress stderr messages when user uses built-in bash commands such as "which".
  • Bug fixes to makepack/labpack programs.

July 19, 2021

  • Add a DNS lab to introduce the DNS protocol and configuration.
  • Revised VirtualBox appliance image to start with the correct update script.
  • Split resolv.conf nameserver parameter out of the lab_gw configuration field into its own value.
  • IModule command failed if run before any labs had been started.

July 5, 2021

  • Errors in DISPLAY env variable management broke GUI applications on Docker Desktop.

July 1, 2021

  • Support Mac package installation of headless Labtainers.
  • The routing-basics lab automated assessment failed due to lack of treataslocal files
  • Correct typos and incorrect addresses in routing-basics lab, and fix automated assessment.
  • Assessment of pcapanalysis was failing.

June 10, 2021

  • All lab manual PDFs are now in the github repo
  • Convert vpnlab and vpnlab2 instructions to PDF lab manuals.

May 25, 2021

  • Add searchable keywords to each lab. See "labtainer -h" for usage.
  • Expand routing-basics lab and lab manual
  • Remove routing-basics2 lab, it is now redundant.
  • sudo on some containers failed because hostnames remove underscores, leading to mismatch with the hosts file. Fix with extra entry in the hosts file with container name sans underscore.
  • New Labpack feature to package a collection of labs, and makepack tool to create Labpacks.
  • Error check for /sbin directory when using ubuntu20 -- would be silently fatal.
  • New network-basics lab

May 5, 2021

  • Introduce a new users lab to introduce user/group management
  • Suppress AppArmor host messages in CentOS container syslogs

April 28, 2021

  • New base2 images lacked man pages. Used unminimize to restore them in the base image.
  • Introduce a OSSEC host-based IDS lab.

April 13, 2021

  • CyberCIEGE lab failed because X11 socket was not relocated prior to starting Wine via fixlocal.

April 9, 2021

  • New gdb-cpp tutorial lab for using GDB on a simple C++ program.
  • Floating point exceptions were revealing use of exec_wrap.sh for stdin/stdout mirroring.

April 7, 2021

  • ldap lab failed when moved to Ubuntu 20; traced to a problem with the nscd cache of pwd. Moved ldap to Ubuntu 20.

March 23, 2021

  • Parameterizing with RANDOM did not include the upper bound.
  • Add optional step parameter to RANDOM, e.g., to ensure word boundaries.
  • db-access lab: add mysql-workbench to database computer.
  • New overrun lab to illustrate memory references beyond bounds of c data structures.
  • New printf lab to introduce memory references made by the printf function.

March 19, 2021

  • gradelab ignores makedirs errors; problem with Windows rmtree on shared folders.
  • gradelab handle spaces in student zip file names.
  • gradelab handle zip file names from Moodle, including build downloads.

March 12, 2021

  • labedit UI: Remove old wireshark image from list of base images.
  • labedit UI: Increase some font sizes.
  • grader web interface failed to display lab manuals if the manual name does not follow naming conventions.

March 11, 2021

  • labedit UI add registry setting in new global lab configuration panel.

March 10, 2021

  • labedit UI fixes to not build if syntax error in lab
  • labedit UI "Lab running" indicator fix to reflect current lab.

March 8, 2021

  • Deprecate use of HOST_HOME_XFER, all labs use directory per the labtainer.config file.
  • Add documentation comment to start.config for REGISTRY and BASE_REGISTRY

March 5, 2021

  • Error handling on gradelab web interface when missing results.
  • labedit: addition of precheck, misc bug fixes.

February 26, 2021

  • The dmz-example lab had errors in routing and setup of dnsmasq on some components.

February 18, 2021

  • UI was rebuilding images because it was updating file times without cause
  • Clean up UI code to remove some redundant data copies.

February 14, 2021

  • Add local build option to UI
  • Create empty faux_init for centos6 bases.

February 11, 2021

  • Fix UI handling of editing files. Revise layout and eliminate unused fields.
  • Add ubuntu20 base2 base configuration along with ssh2, network2 and wireshark2
  • The new wireshark solves the problem of black/noise windows.
  • Map /tmp/.X11-unix to /var/tmp and create a link. Needed for ubuntu20 (was deleting /tmp?) and may fix others.

February 4, 2021

  • Add SIZE option to results artifacts
  • Simplify wireshark-intro assessment and parameterization and add PDF lab manual.
  • Provide parameter list values to pregrade.sh script as environment variables
  • Enable X11 on the grader
  • Put update-designer.sh into the user's path.

January 19, 2021

  • Change management of README date/rev to update file in source repo.
  • Introduce GUI for creating/editing labs -- see labedit command.

December 21, 2020

  • The gradelab function failed when zip files were copied from a VirtualBox shared folder.
  • Update Instructor Guide to describe management of student zip files on host computers.

December 4, 2020

  • Transition distribution of tar files to GitHub release artifacts
  • Eliminate separate designer tar file; use git repo tarball.
  • Testing of grader web functions for analysis of student lab artifacts
  • Clear logs from full smoketest and delete grader container in removelab command.

December 1, 2020

  • The iptables2 lab assessment relied on random ports being "unknown" to nmap.
  • Use a sync directory to delay smoketests from starting prior to lab startup.
  • Begin integrating Lab designer UI elements.

October 13, 2020

  • Headless configurations for running on Docker Desktop on Macs & Windows
  • Headless server support, cloud-config file for cloud deployments
  • Testing support for headless configurations
  • Force mynotify to wait until rc.local runs on boot
  • Improve mynotify service ability to merge output into single timestamp
  • Python3 for stopgrade script
  • SimLab now uses docker top rather than system ps

September 26, 2020

  • Clean up the stoplab scripts to ignore non-lab containers
  • Add db-access database access control lab for controlled sharing of a mysql db.

September 17, 2020

  • The macs-hash lab was unable to run Leafpad due to the X11 setting.
  • Grader logging was being redirected to the wrong log file, now captures errors from instructor.py
  • Copy instructor.log from grader to the host logs directory if there is an error.

August 28, 2020

  • Fix install script to use python3-pip and fix broken scripts: getinfo.py and pull-all.py
  • Registry logic was broken, test systems were not using the test registry, add development documentation.
  • Add juiceshop and owasp base files for OWASP-based web security labs
  • Remove unnecessary sudos from check_nets
  • Add CHECK_OK documentation directive for automated assessment
  • Change check_nets to fix iptables and routing issues if so directed.

August 12, 2020

  • Add timeout to prestop scripts
  • Add quiz and checkwork to dmz-lab
  • Restarting the dmz-lab without -r option broke routing out of the ISP.
  • Allow multiple files for time_delim results.

August 6, 2020

  • Bug in error handling when X11 socket is missing
  • Commas in quiz questions led to parse errors
  • Add quiz and checkwork to iptables2 lab

July 28, 2020

  • Add quiz support -- these are guidance quizzes, not assessment quizzes. See the designer guide.
  • Add current-state assessment for use with the checkwork command.

July 21, 2020

  • Add testsets/bin to designer's path
  • Designer guide corrections and explanations for IModule steps.
  • Add RANGE_REGEX result type for defining time ranges using regular expressions on log entries.
  • Check that X11 socket exists if it is needed when starting a lab.
  • Add base image for mysql
  • Handle mysql log timestamp formats in results parsing.

June 15, 2020

  • New base image containing the Bird open source router
  • Add bird-bgp Border Gateway Protocol lab.
  • Add bird-ospf Open Shortest Path First routing protocol.
  • Improve handling of DNS changes, external access from some containers was blocked in some sites.
  • Add section to Instructor Guide on using Labtainers in environments lacking Internet access.

May 21, 2020

  • Move all repositories to the Docker Hub labtainers registry
  • Support mounts defined in the start.config to allow persistent software installs
  • Change ida lab to use persistent installation of IDA -- new name is ida2
  • Add cgc lab for exploration of over 200 vulnerable services from the DARPA Cyber Grand Challenge
  • Add type_string command to SimLab
  • Add netflow lab for use of NetFlow network traffic analysis
  • Add 64-bit versions of the bufoverflow and the formatstring labs

April 9, 2020

  • Grader failed assessment of CONTAINS and FILE_REGEX conditions when wildcards were used for file selection.
  • Include hints for using hexedit in the symlab lab.
  • Add hash_equal operator and hash-goals.py to automated assessment to avoid publishing expected answers in configuration files.
  • Automated assessment for the pcap-lib lab.

April 7, 2020

  • Logs have been moved to $LABTAINER_DIR/logs
  • Other cleanup to permit rebuilds and tests using Jenkins, including use of unique temporary directories for builds
  • Move build support functions out of labutils into build.py
  • Add pcap-lib lab for PCAP library based development of traffic analysis programs

March 13, 2020

  • Add plc-traffic lab for use of GrassMarlin with traffic generated during the lab.
  • Introduce ability to add "tap" containers to collect PCAPs from selected networks.
  • Update GNS3 documentation for external access to containers, and use of dummy_hcd to simulate USB drives.
  • Change kali template to use faux_init rather than attempting to use systemd.
  • Moving distributions (tar files) to box.com
  • Change SimLab use of netstat to not do a dns lookup.

February 26, 2020

  • If labtainer command does not find lab, suggest that user run update-labtainer.sh
  • Add preliminary support for a network tap component to view all network traffic.
  • Script to fetch lab images to prep VMs that will be used without internet.
  • Provide username and password for nmap-discovery lab.

February 18, 2020

  • Inherit the DISPLAY environment variable from the host (e.g., VM) instead of assuming :0

February 14, 2020

February 11, 2020

  • Update guides to describe remote access to containers within GNS3 environments
  • Hide selected components and links within GNS3.
  • Figures in the webtrack lab guide were not visible; fix typos in this and the nmap-ssh guide.

February 6, 2020

  • Introduce function to remotely manage containers, e.g., push files.
  • Add GNS3 environment function to simulate insertion of a USB drive.
  • Improve handling of Docker build errors.

February 3, 2020

  • On the metasploit lab, the postgresql service was not running on the victim.
  • Merge the IModule manual content into the Lab Designer guide.
  • More IModule support.

January 27, 2020

  • Introduce initial support for IModules (instructor-developed labs). See docs/imodules.pdf.
  • Fix broken LABTAINER_DIR env variable within update-labtainer
  • Fix access mode on accounting.txt file in ACL lab (had become rw-r-r). Use explicit chmod in fixlocal.sh.

January 14, 2020

  • Port framework and gradelab to Python3 (existing Python2 labs will not change)
    • Use backward compatible random.seed options
    • Hack non-compatible randint to return old values
    • Continue to support python2 for platforms that lack python3 (or those such as the older VM appliance that include python 3.5.2, which breaks random.seed compatibility).
    • Add rebuild alias for rebuild.py that will select python2 if needed.
  • Centos-based labs manpages were failing; use mandb within base docker file
  • dmz-lab netmask for DMZ network was wrong (caught by python3); as was IP address of inner gateway in lab manual
  • ghex removed from centos labs -- no longer easily supported by centos 7
  • file-deletion lab must be completed without rebooting the VM, note this in the Lab Manual.
  • Add NO_GW switch to start.config to disable default gateways on containers.
  • Metasploit lab crashes the host VM if run as privileged; long delays on su if systemd is enabled; so run without systemd. Remove use of the database from the lab manual, configure to use the new NO_GW switch
  • Update file headers for licensing/terms; add consolidated license file.
  • Modify publish.py to default to use of test registry, use -d to force use of default_registry
  • Revise source control procedures to use different test registry for each branch, and use a premaster branch for final testing of a release.
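The randint incompatibility noted above arises because Python 3 draws integers with a different algorithm than Python 2, even under identical seeds; the underlying random() float stream, however, is seed-stable across both. A minimal sketch of that kind of compatibility shim (an illustration, not the actual Labtainers code):

```python
import random

def py2_compat_randint(rng, a, b):
    """randint derived from random() floats, which are identical across
    Python 2 and 3 for the same Mersenne Twister seed.  Python 3's own
    randint uses a different draw and returns different values."""
    return a + int(rng.random() * (b - a + 1))

rng = random.Random(42)
print(py2_compat_randint(rng, 1, 100))  # → 64, matching Python 2's randint
```

For seed 42 the first random() float is 0.6394..., so both interpreters map it to the same value, which is the property the framework needed to keep old grading results reproducible.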

October 9, 2019

  • Remove dnsmasq from dns component in the dmz-lab. Was causing bind to fail on some installations.

October 8, 2019

  • Syntax error in test registry setup; lab designer info on large files; fetch bigexternal.txt files

September 30, 2019

  • DockerHub registry retrieval again failing for some users. Ignore html prefix to json.

September 20, 2019

  • Assessment of onewayhash should allow hmac operations on file of student's choosing.

September 5, 2019

  • Rebuild metasploit lab, metasploit-framework exhibited a bug. And the lab's "treataslocal" file was left out of the move from svn. Fix typo in metasploit lab manual.

August 30, 2019

  • Revert test for existence of container directories, they do not always exist.

August 29, 2019

  • Lab image pulls from docker hub failed due to change in github or curl? Catch redirect to cloudflare. Addition of GNS3 support. Fix to dmz-lab dnssec.

July 11, 2019

  • Automated assessment for CentOS6 containers, fix for firefox memory issue, support arbitrary docker create arguments in the start.config file.

June 6, 2019

  • Introduce a Centos6 base, but not support for automated assessment yet

May 23, 2019

  • Automated assessment of setuid-env failed due to typos in field separators.

May 8, 2019

  • Corrections to Capabilities lab manual

May 2, 2019

  • Acl lab fix to bobstuff.txt permissions. Use explicit chmod in fixlocal.sh
  • Revise student guide to clarify use of stop and -r option in body of the manual.

March 9, 2019

  • The checkwork function was reusing containers, thereby preventing students from eliminating artifacts from previous lab work.
  • Add appendix to the symkey lab to describe the BMP image format.

February 22, 2019

  • The http server failed to start in the vpn and vpn2 labs. Automated assessment removed from those labs until reworked.

January 27, 2019

  • Fix lab manual for routing-basics2 and fix routing to enable external access to internal web server.

January 7, 2019

  • Fix gdblesson automated assessment to at least be operational.

December 29, 2018

  • Fix routing-basics2, same issues as routing-basics, plus an incorrect ip address in the gateway resolv.conf

December 5, 2018

  • Fix routing-basics lab, dns resolution at isp and gateway components was broken.

November 14, 2018

  • Remove /run/nologin from archive machine in backups2 -- need general solution for this nologin issue

November 5, 2018

  • Change file-integrity lab default aid.conf to track metadata changes rather than file modification times

October 22, 2018

  • macs-hash lab resolution of verydodgy.com failed on lab restart
  • Notify function failed if notify_cb.sh is missing

October 12, 2018

  • Set ulimit on file size, limit to 1G
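The changelog does not show the mechanism for the cap above; in Python such a limit could be applied with resource.setrlimit (a sketch of the general technique, not the actual Labtainers implementation):

```python
import resource

ONE_GB = 1024**3  # 1073741824 bytes

# Cap the maximum size of any file this process (and its children) may
# create; writes past the limit fail instead of filling the disk with,
# e.g., a runaway packet capture.
resource.setrlimit(resource.RLIMIT_FSIZE, (ONE_GB, ONE_GB))

soft, hard = resource.getrlimit(resource.RLIMIT_FSIZE)
print(soft)  # 1073741824
```

Limits set this way are inherited across fork/exec, which is what makes them useful for constraining everything a lab container subsequently runs.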

October 10, 2018

  • Force collection of parameterized files
  • Explicitly include leafpad and ghex in centos-xtra baseline and rebuild dependent images.

September 28, 2018

  • Fix access modes of shared file in ACL lab
  • Clarify question in pass-crack
  • Modify artifact collection to ignore files older than start of lab.
  • Add quantum computing algorithms lab

September 12, 2018

  • Fix setuid-env grading syntax errors
  • Fix syntax error in iptables2 example firewall rules
  • Rebuild centos labs, move lamp derivatives to use lamp.xtr for waitparam and force httpd to wait for that to finish.

September 7, 2018

  • Add CyberCIEGE as a lab
  • Display read_pre.txt information prior to pulling images, with a chance to bail.

September 5, 2018

  • Restore sakai bulk download processing to gradelab function.
  • Remove unused instructor scripts.

September 4, 2018

  • Allow multiple IP addresses per network interface
  • Add base image for Wine
  • Add GRFICS virtual ICS simulation

August 23, 2018

  • Add GrassMarlin lab (ICS network discovery)

August 21, 2018

  • Another fix around AWS authentication issues (DockerHub uses AWS).
  • Fix new_lab_setup.py to use git instead of svn.
  • Split plc-forensics lab into a basic lab and an advanced lab (plc-forensics-adv)

August 17, 2018

  • Transition to git & GitHub as authoritative repo.

August 15, 2018

  • Modify plc-forensics lab assessment to be more general; revise lab manual to reflect wireshark on the Labtainer.

August 15, 2018

  • Add "checkwork" command allowing students to view automated assessment results for their lab work.
  • Include logging of iptables packet drops in the iptables2 and the iptables-ics lab.
  • Remove obsolete instances of is_true and is_false from goal.config
  • Fix boolean evaluation to handle "NOT foo", it had expected more operands.

August 9, 2018

  • Support parameter replacement in results.config files
  • Add TIME_DELIM result type for results.config
  • Rework the iptables lab, remove hidden nmap commands, introduce custom service

August 7, 2018

  • Add link to student guide in labtainer-student directory
  • Add link to student guide on VM desktops
  • Fixes to iptables-ics to avoid long delay on shutdown; and fixes to regression tests
  • Add note to guides suggesting student use of VM browser to transfer artifact zip file to instructor.

August 1, 2018

  • Use a generic Docker image for automated assessment; stop creating "instructor" images per lab.

July 30, 2018

  • Document need to unblock the waitparam.service (by creating flag directory) if a fixlocal.sh script is to start a service for which waitparam is a prerequisite.
  • Add plc-app lab for PLC application firewall and whitelisting exercise.

July 25, 2018

  • Add string_contains operator to goals processing
  • Modify assessment of formatstring lab to account for leaked secret not always being at the end of the displayed string.

July 24, 2018

  • Add SSH Agent lab (ssh-agent)

July 20, 2018

  • Support offline building, optionally skip all image pulling
  • Restore apt/yum repo restoration to Dockerfile templates.
  • Handle redirect URL's from Docker registry blob retrieval to avoid authentication errors (Do not rely on curl --location).

July 12, 2018

  • Add prestop feature to allow execution of designer-specified scripts on selected components prior to lab shutdown.
  • Correct host naming in the ssl lab, it was breaking automated assessment.
  • Fix dmz-lab initial state to permit DNS resolutions from inner network.
  • FILE_REGEX processing was not properly handling multiline searches.
  • Framework version derived from newly rebuilt images had incorrect default value.

July 10, 2018

  • Add an LDAP lab
  • Complete transition to systemd based Ubuntu images, remove unused files
  • Move lab_sys tar file to per-container tmp directory for concurrency.

July 6, 2018

  • All Ubuntu base images replaced with versions based on systemd
  • Labtainer container images in registry now tagged with base image ID & have labels reflecting the base image.
  • A given installation will pull and use images that are consistent with the base images it possesses.
  • If you are using a VM image, you may want to replace that with a newer VM image from our website.
  • New labs will not run without downloading newer base images; which can lead to your VM storing multiple versions of large base images (> 500 MB each).
  • Was losing artifacts from processes that were running when lab was stopped -- was not properly killing capinout processes.

June 27, 2018

  • Add support for Ubuntu systemd images
  • Remove old copy of SimLab.py from labtainer-student/bin
  • Move apt and yum sources to /var/tmp
  • Clarify differences between use of "boolean" and "count_greater" in assessments
  • Extend Add-HOST in start.config to include all components on a network.
  • Add option to new_lab_setup.py to add a container based on a copy of an existing container.

June 21, 2018

  • Set DISPLAY env for root
  • Fix to build dependency handling of svn status output
  • Add radius lab
  • Bug in SimLab append corrected
  • Use svn, where appropriate, to change file names with new_lab_setup.py

June 19, 2018

  • Retain order of containers defined in start.conf when creating terminal with multiple tabs
  • Clarify designer manual to identify path to assessment configuration files.
  • Remove prompt for instructor to provide email
  • Botched error checking when testing for version number
  • Include timestamps of lab starts and redos in the assessment json
  • Add an SSL lab that includes bi-directional authentication and creation of certificates.

June 15, 2018

  • Convert plain text instructions that appeared in xterms into pdf file.
  • Fix bug in version handling of images that have not yet been pulled.
  • Detect occurrence of a container that was created, but not parameterized, and prompt the user to restart the lab with the "-r" option.
  • Add designer utility: rm_svn.py so that removed files trigger an image rebuild.

June 14, 2018

  • Add diagnostics to parameterizing, track down why some installs seem to fail on that.
  • If a container is already created, make sure it is parameterized, otherwise bail to avoid corrupt or half-baked containers.
  • Fix program version number to use svn HEAD

June 13, 2018

  • Install xterm on Ubuntu 18 systems
  • Work around breakage in new versions of gnome-terminal tab handling

June 11, 2018

  • Add version checking to compare images to the framework.
  • Clarify various lab manuals

June 2, 2018

  • When installing on Ubuntu 18, use docker.io instead of docker-ce
  • The capinout caused a crash when a "sudo su" monitored command is followed by a non-elevated user command.
  • Move routing and resolv.conf settings into /etc/rc.local instead of fixlocal.sh so they persist across start/stop of the containers.

May 31, 2018

  • Work around Docker bug that caused text to wrap in a terminal without a line feed.
  • Extend COMMAND_COUNT to account for pipes
  • Create new version of backups lab that includes backups to a remote server and backs up an entire partition.
  • Alter sshlab instructions to use ssh-copy-id utility
  • Delete /run/nologin file from parameterize.sh to permit ssh login on CentOS

May 30, 2018

  • Extended new_lab_setup.py to permit identification of the base image to use
  • Create new version of centos-log that includes centralized logging.
  • Assessment validation was not accepting "time_not_during" option.
  • Begin to integrate Labtainer Master for managing Labtainers from a Docker container.

May 25, 2018

  • Remove 10 second sleeps from various services. Was delaying xinetd responses, breaking automated tests.
  • Fix snort lab grading to only require "CONFIDENTIAL" in the alarm. Remove unused files from lab.
  • Program finish times were not recorded if the program was running when the lab was stopped.

May 21, 2018

  • Fix retlibc grading to remove duplicate goal, was failing automated assessment
  • Remove copies of mynotify.py from individual labs and lab template; it has been part of lab_sys/sbin, but had not been updated to reflect fixes made for the acl lab.

May 18, 2018

  • Mask signal message from exec_wrap so that segv error message looks right.
  • The capinout was sometimes losing stdout, check command stdout on death of cmd.
  • Fix grading of formatstring to catch segmentation fault message.
  • Add type_function feature to SimLab to type stdout of a script (see formatstring simlab).
  • Remove SimLab limitation on combining single/double quotes.
  • Add window_wait directive to SimLab to pause until window with given title can be found.
  • Modify plc lab to alter titles on physical world terminal to reflect status, this also makes testing easier.
  • Fix bufoverflow lab manual link.

May 15, 2018

  • Add appendix on use of the SimLab tool to simulate user performance of labs for regression testing and lab development.
  • Add wait_net function to SimLab to pause until selected network connections terminate.
  • Change acl automated assessment to use FILE_REGEX for multiline matching.
  • SimLab test for xsite lab.

May 11, 2018

  • Add "noskip" file to force collection of files otherwise found in home.tar, needed for retrieving Firefox places.sqlite.
  • Merge sqlite database with write ahead buffer before extracting.
  • Corrections to lab manual for the symkeylab
  • Grading additions for symkeylab and pubkey
  • Improvements to simlab tool: support include, fix window naming.

May 9, 2018

  • Fix parameterization of the file-deletion lab. Correct error in its lab manual.
  • Replace use of shell=True in python scripts to reduce processes and allow tracking PIDs
  • Clean up manuals for backups, pass-crack and macs-hash.
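The shell=True change above can be illustrated with a small subprocess sketch (the commands are placeholders, not the actual Labtainers scripts):

```python
import subprocess

# shell=True spawns /bin/sh, which in turn spawns the command, so
# proc.pid is the shell's PID rather than the command's -- the extra
# process makes the real child hard to track or signal:
#
#   proc = subprocess.Popen("sleep 5", shell=True)
#
# Passing an argument list runs the program directly: one fewer
# process, and proc.pid is the command's own PID.
proc = subprocess.Popen(["sleep", "5"])
print(proc.pid)     # PID of sleep itself
proc.terminate()    # SIGTERM reaches the right process
proc.wait()
```

Avoiding the intermediate shell also sidesteps quoting and injection pitfalls, though the changelog's stated motivation is process accounting.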

May 8, 2018

  • Handle race condition to prevent gnome-terminal from executing its docker command before an xterm instruction terminal runs its command.
  • Don't display errors when instructor stops a lab started with "-d".
  • Change grading of nmap-ssh to better reflect intent of the lab.
  • Several document and script fixes suggested by olberger on github.

May 7, 2018

  • Use C-based capinout program instead of the old capinout.sh to capture stdin and stdout. See trunk/src-tool/capinout. Removes limitations associated with use of ctrl-C to break monitored programs and the display of passwords in telnet and ssh.
  • Include support for sakai bulk_download zip processing to extract separately submitted reports, and summarize missing submits.
  • Add checks to user-provided email to ensure they are printable characters.
  • While grading, if the user-supplied email does not match the zip file name, proceed to grade the results, but include a note in the table reflecting possible cheating. Required to recover from cases where a student enters garbage for an email address.
  • Change telnetlab grading to not look at tcpdump output for passwords -- capinout fix leads to correct character-at-a-time transmission to server.
  • Fix typo in install-docker.sh and use sudo to alter docker dns setting in that script.

April 26, 2018

  • Transition to use of "labtainer" to start lab, and "stoplab" to stop it.
  • Add --version option to labtainer command.
  • Add log_ts and log_range result types, and time_not_during goal operators. Revamp the centos-log and sys-log grading to use these features.
  • Put labsys.tar into /var/tmp instead of /tmp, sometimes would get deleted before expanded
  • Running X applications as root fails after reboot of VM.
  • Add "User Command" man pages to CentOS based labs
  • Fix recent bug that prevented collection of docs files from students
  • Modify smoke-tests to only compare student-specific result line, void of whitespace

April 20, 2018

  • The denyhosts service fails to start the first time, moved start to student_startup.sh.
  • Move all faux_init services until after parameterization -- rsyslog was failing to start on second boot of container.

April 19, 2018

  • The acl lab failed to properly assess performance of the trojan horse step.
  • Collect student documents by default.
  • The denyhost lab changed to reflect that denyhosts (or tcp wrappers?) now modifies iptables. Also, the denyhosts service was failing to start on some occasions.
  • When updating Labtainers, do not overwrite files that are newer than those in the archive -- preserve student lab reports.

April 12, 2018

  • Add documentation for the purpose of lab goals, and display this for the instructor when the instructor starts a lab.
  • Correct use of the precheck function when the program is in treataslocal; pass capinout.sh the full program path.
  • Copy instr_config files at run time rather than during image build.
  • Add Designer Guide section on debugging automated assessment.
  • Incorrect case in lab report file names.
  • Unnecessary chown function caused instructor.py to sometimes crash.
  • Support for automated testing of labs (see SimLab and smoketest).
  • Move testsets and distrib under trunk

April 5, 2018

  • Revise Firefox profile to remove the "you've not used Firefox in a while..." message.
  • Remove unnecessary pulls from registry -- get image dates via docker hub API instead.

March 28, 2018

  • Use explicit tar instead of "docker cp" for system files (Docker does not follow links.)
  • Fix backups lab to use a separate file system and update the manual.

March 26, 2018

  • Support for multi-user modes (see Lab Designer User Guide).
  • Removed build dependency on the lab_bin and lab_sys files. Those are now copied during parameterization of the lab.
  • Move capinout.sh to /sbin so it can be found when running as root.

March 21, 2018

  • Add CLONE to permit multiple instances of the same container, e.g., for labs shared by multiple concurrent students.
  • Adapt kali-test lab to provide example of macvlan and CLONE
  • Copy the capinout.sh script to /sbin so root can find it after a sudo su.

March 15, 2018

  • Support macvlan networks for communications with external hosts
  • Add a Kali linux base, and a Metasploitable 2 image (see kali-test)

March 8, 2018

  • Do not require labname when using stop.py
  • Catch errors caused by stray networks and advise user on a fix
  • Add support for use of local apt & yum repos at NPS

February 21, 2018

  • Add dmz-lab
  • Change "checklocal" to "precheck", reflecting it runs prior to the command.
  • Decouple inotify event reporting from use of precheck.sh, allow inotify event lists to include optional outputfile name.
  • Extend bash hook to root operations, flush that bash_history.
  • Allow parameterization of start.config fields, e.g., for random IP addresses
  • Support monitoring of services started via systemctl or /etc/init.d
  • Introduce time delimiter qualifiers to organize a timestamped log file into ranges delimited by some configuration change of interest (see dmz-lab)

February 5, 2018

  • Boolean values from results.config files are now treated as goal values
  • Add regular expression support for identifying artifact results.
  • Support for alternate Docker registries, including a local test registry for testing
  • Misc fixes to labs and lab manuals
  • The capinout monitoring hook was not killing child processes on exit.
  • Kill monitored processes before collecting artifacts
  • Add labtainer.wireshark as a baseline container, clean up dockerfiles

January 30, 2018

  • Add snort lab
  • Integrate log file timestamps, e.g., from syslogs, into timestamped results.
  • Remove undefined result values from intermediate timestamped json result files.
  • Alter the time_during goal assessment operation to associate timestamps with the resulting goal value.

January 24, 2018

  • Use of tabbed windows caused the instructor side to fail due to use of double quotes.
  • Ignore files in _tar directories (other than .tar) when determining build dependencies.


SideWinder Hackers Launched Over 1,000 Cyber Attacks Over the Past 2 Years

An "aggressive" advanced persistent threat (APT) group known as SideWinder has been linked to over 1,000 new attacks since April 2020. "Some of the main characteristics of this threat actor that make it stand out among the others, are the sheer number, high frequency and persistence of their attacks and the large collection of encrypted and obfuscated malicious components used in their

YODA Tool Found ~47,000 Malicious WordPress Plugins Installed in Over 24,000 Sites

As many as 47,337 malicious plugins have been uncovered on 24,931 unique websites, out of which 3,685 plugins were sold on legitimate marketplaces, netting the attackers $41,500 in illegal revenues. The findings come from a new tool called YODA that aims to detect rogue WordPress plugins and track down their origin, according to an 8-year-long study conducted by a group of researchers from the

More Mysterious DNS Root Query Traffic from a Large Cloud/DNS Operator

This blog was also published by APNIC.

With so much traffic on the global internet day after day, it’s not always easy to spot the occasional irregularity. After all, there are numerous layers of complexity that go into the serving of webpages, with multiple companies, agencies and organizations each playing a role.

That’s why when something does catch our attention, it’s important that the various entities work together to explore the cause and, more importantly, try to identify whether it’s a malicious actor at work, a glitch in the process or maybe even something entirely intentional.

That’s what occurred last year when Internet Corporation for Assigned Names and Numbers staff and contractors were analyzing names in Domain Name System queries seen at the ICANN Managed Root Server, and the analysis program ran out of memory for one of their data files. After some investigating, they found the cause to be a very large number of mysterious queries for unique names such as f863zvv1xy2qf.surgery, bp639i-3nirf.hiphop, qo35jjk419gfm.net and yyif0aijr21gn.com.

While these were queries for names in existing top-level domains, the first label consisted of 12 or 13 random-looking characters. After ICANN shared their discovery with the other root server operators, Verisign took a closer look to help understand the situation.

Exploring the Mystery

One of the first things we noticed was that all of these mysterious queries were of type NS and came from one autonomous system network, AS 15169, assigned to Google LLC. Additionally, we confirmed that it was occurring consistently for numerous TLDs. (See Fig. 1)

Distribution of second-level label lengths in NS queries from AS 15169
Figure 1: Distribution of second-level label lengths in queries to root name servers, comparing AS 15169 to others, for several different TLDs.

Although this phenomenon was newly uncovered, analysis of historical data showed these traffic patterns actually began in late 2019. (See Fig. 2)

Daily count of NS Queries from AS 15169
Figure 2: Historical data shows the mysterious queries began in late 2019.

Perhaps the most interesting discovery, however, was that these specific query names were not also seen at the .com and .net name servers operated by Verisign. The data in Figure 3 shows the fraction of queried names that appear at A-root and J-root and also appear on the .com and .net name servers. For second-level labels of 12 and 13 characters, this fraction is essentially zero. The graphs also show that there appear to be queries for names with second-level label lengths of 10 and 11 characters, which are also absent from the TLD data.

Fraction of SLDs seen at A/J-root also seen at TLD (AS 15169 queries)
Figure 3: Fraction of queries from AS 15169 appearing on A-root and J-root that also appear on .com and .net name servers, by the length of the second-level label.

The final mysterious aspect to this traffic is that it deviated from our normal expectation of caching. Remember that these are queries to a root name server, which returns a referral to the delegated name servers for a TLD. For example, when a root name server receives a query for yyif0aijr21gn.com, the response is a list of the name servers that are authoritative for the .com zone. The records in this response have a time to live of two days, meaning that the recursive name server can cache and reuse this data for that amount of time.

However, in this traffic we see queries for .com domain names from AS 15169 at the rate of about 30 million per day. (See Fig. 4) It is well known that Google Public DNS has thousands of backend servers and limits TTLs to a maximum of six hours. Assuming 4,000 backend servers each cached a .com referral for six hours, we might expect about 16,000 queries over a 24-hour period. The observed count is about 2,000 times higher by this back-of-the-envelope calculation.
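The back-of-the-envelope calculation above works out as follows (the backend-server count and TTL cap are the article's stated assumptions):

```python
# Expected query volume if every backend cached the .com referral
# for the full capped TTL.
backend_servers = 4_000            # assumed Google Public DNS backends
ttl_hours = 6                      # Google DNS caps TTLs at six hours
refreshes_per_day = 24 // ttl_hours

expected = backend_servers * refreshes_per_day   # queries/day if fully cached
observed = 30_000_000                            # observed .com queries/day

print(expected)              # 16000
print(observed // expected)  # 1875 -- roughly the 2,000x the article cites
```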

Queries per day from AS 15169 to A/J-root for names with second-level label length equal to 12 or 13 (July 6, 2021)
Figure 4: Queries per day from AS 15169, for names with second-level label length equal to 12 or 13, over a 24-hour period.

From our initial analysis, it was unclear if these queries represented legitimate end-user activity, though we were confident that source IP address spoofing was not involved. However, since the query names shared some similarities to those used by botnets, we could not rule out malicious activity.

The Missing Piece

These findings were presented last year at the DNS-OARC 35a virtual meeting. In the conference chat room after the talk, the missing piece of this puzzle was mentioned by a conference participant. There is a Google webpage describing its public DNS service that talks about prepending nonce (i.e., random) labels for cache misses to increase entropy. In what came to be known as “the Kaminsky Attack,” an attacker can cause a recursive name server to emit queries for names chosen by the attacker. Prepending a nonce label adds unpredictability to the queries, making it very difficult to spoof a response. Note, however, that nonce prepending only works for queries where the reply is a referral.

In addition, Google DNS has implemented a form of query name minimization (see RFC 7816 and RFC 9156). As such, if a user requests the IP address of www.example.com and Google DNS decides this warrants a query to a root name server, it takes the name, strips all labels except for the TLD and then prepends a nonce string, resulting in something like u5vmt7xanb6rf.com. A root server’s response to that query is identical to one using the original query name.
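The combined behavior just described can be sketched as follows (a simplified illustration; the 13-character nonce length and the alphabet are assumptions, not Google's published implementation):

```python
import secrets
import string

# Assumed lowercase base32-style alphabet for the nonce label.
BASE32_ALPHABET = string.ascii_lowercase + "234567"

def minimized_root_query(name: str) -> str:
    """Sketch of query name minimization plus nonce prepending: keep
    only the TLD of the requested name and prepend a random label
    before querying a root server."""
    tld = name.rstrip(".").rsplit(".", 1)[-1]
    nonce = "".join(secrets.choice(BASE32_ALPHABET) for _ in range(13))
    return f"{nonce}.{tld}"

print(minimized_root_query("www.example.com"))  # e.g. u5vmt7xanb6rf.com
```

Because the root server's referral for any name under .com is identical, the randomized label costs nothing in correctness while making a spoofed response essentially unguessable.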

The Mystery Explained

Now, we are able to explain nearly all of the mysterious aspects of this query traffic from Google. We see random second-level labels because of the nonce strings that are designed to prevent spoofing. The 12- and 13-character-long labels are most likely the result of converting a 64-bit random value into an unpadded ASCII label with encoding similar to Base32. We don’t observe the same queries at TLD name servers because of both the nonce prepending and query name minimization. The query type is always NS because of query name minimization.
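The 12-or-13-character observation can be checked with a minimal integer-to-base-32 conversion (the alphabet is an assumption; the article says only that the encoding is "similar to Base32"):

```python
def to_base32_label(value: int,
                    alphabet: str = "abcdefghijklmnopqrstuvwxyz234567") -> str:
    """Encode a non-negative integer in base 32 without padding."""
    if value == 0:
        return alphabet[0]
    digits = []
    while value:
        value, rem = divmod(value, 32)
        digits.append(alphabet[rem])
    return "".join(reversed(digits))

# Each base-32 digit carries 5 bits, so a full 64-bit value needs
# ceil(64/5) = 13 digits, while values below 2**60 fit in 12 --
# matching the observed 12- and 13-character labels.
print(len(to_base32_label(2**64 - 1)))  # 13
print(len(to_base32_label(2**60 - 1)))  # 12
```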

With that said, there’s still one aspect that eludes explanation: the high query rate (2000x for .com) and apparent lack of caching. And so, this aspect of the mystery continues.

Wrapping Up

Even though we haven’t fully closed the books on this case, one thing is certain: without the community’s teamwork to put the pieces of the puzzle together, explanations for this strange traffic might still be unknown today. The case of the mysterious DNS root query traffic is a perfect example of the collaboration that’s required to navigate today’s ever-changing cyber environment. We’re grateful and humbled to be part of such a dedicated community that is intent on ensuring the security, stability and resiliency of the internet, and we look forward to more productive teamwork in the future.

The post More Mysterious DNS Root Query Traffic from a Large Cloud/DNS Operator appeared first on Verisign Blog.

Researchers Uncover Malware Controlling Thousands of Sites in Parrot TDS Network

The Parrot traffic direction system (TDS) that came to light earlier this year has had a larger impact than previously thought, according to new research. Sucuri, which has been tracking the same campaign since February 2019 under the name "NDSW/NDSX," said that "the malware was one of the top infections" detected in 2021, accounting for more than 61,000 websites. Parrot TDS was documented in

GitLab Issues Security Patch for Critical Account Takeover Vulnerability

GitLab has moved to address a critical security flaw in its service that, if successfully exploited, could result in an account takeover. Tracked as CVE-2022-1680, the issue has a CVSS severity score of 9.9 and was discovered internally by the company. The security flaw affects all versions of GitLab Enterprise Edition (EE) starting from 11.10 before 14.9.5, all versions starting from 14.10

CISA Warned About Critical Vulnerabilities in Illumina's DNA Sequencing Devices

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) and Food and Drug Administration (FDA) have issued an advisory about critical security vulnerabilities in Illumina's next-generation sequencing (NGS) software. Three of the flaws are rated 10 out of 10 for severity on the Common Vulnerability Scoring System (CVSS), with two others having severity ratings of 9.1 and 7.4. The issues

Be Proactive! Shift Security Validation Left

The "shift (security) left" approach in the Software Development Life Cycle (SDLC) means starting security earlier in the process. As organizations realized that software never comes out perfect and is riddled with exploitable holes, bugs, and business logic vulnerabilities that require going back to fix and patch, they understood that building secure software requires incorporating and

Cybersecurity awareness training: What is it and what works best?

Give employees the knowledge needed to spot the warning signs of a cyberattack and to understand when they may be putting sensitive data at risk

The post Cybersecurity awareness training: What is it and what works best? appeared first on WeLiveSecurity

AutoPWN Suite - Project For Scanning Vulnerabilities And Exploiting Systems Automatically


AutoPWN Suite is a project for scanning vulnerabilities and exploiting systems automatically.

How does it work?

AutoPWN Suite uses an nmap TCP-SYN scan to enumerate the host and detect the versions of software running on it. After gathering enough information about the host, AutoPWN Suite automatically generates a list of "keywords" to search the NIST vulnerability database.
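The flow described above, fingerprint a service and then keyword-search the NVD, can be sketched roughly as follows. This is illustrative only: the endpoint and `keywordSearch` parameter come from the public NVD CVE API 2.0, not from AutoPWN Suite's actual code, and the scan result is invented.

```python
from urllib.parse import urlencode

# Public NVD CVE API 2.0 endpoint (real); the scan results feeding it are invented.
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def nvd_keyword_url(product, version):
    """Build an NVD keyword-search URL for one detected product/version pair."""
    return f"{NVD_API}?{urlencode({'keywordSearch': f'{product} {version}'})}"

# e.g. for a host where nmap's version detection reported OpenSSH 8.2:
url = nvd_keyword_url("OpenSSH", "8.2")
```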

Visit "PWN Spot!" for more information


Demo

AutoPWN Suite has very user-friendly, easy-to-read output.


Installation

You can install it using pip. (sudo recommended)

sudo pip install autopwn-suite

OR

You can clone the repo.

git clone https://github.com/GamehunterKaan/AutoPWN-Suite.git

OR

You can download the Debian (deb) package from releases.

sudo apt-get install ./autopwn-suite_1.1.5.deb

Usage

Running with root privileges (sudo) is always recommended.

Automatic mode (This is the intended way of using AutoPWN Suite.)

autopwn-suite -y

Help Menu

$ autopwn-suite -h
usage: autopwn.py [-h] [-o OUTPUT] [-t TARGET] [-hf HOSTFILE] [-st {arp,ping}] [-nf NMAPFLAGS] [-s {0,1,2,3,4,5}] [-a API] [-y] [-m {evade,noise,normal}] [-nt TIMEOUT] [-c CONFIG] [-v]

AutoPWN Suite

options:
-h, --help show this help message and exit
-o OUTPUT, --output OUTPUT
Output file name. (Default : autopwn.log)
-t TARGET, --target TARGET
Target range to scan. This argument overwrites the hostfile argument. (192.168.0.1 or 192.168.0.0/24)
-hf HOSTFILE, --hostfile HOSTFILE
File containing a list of hosts to scan.
-st {arp,ping}, --scantype {arp,ping}
Scan type.
-nf NMAPFLAGS, --nmapflags NMAPFLAGS
Custom nmap flags to use for portscan. (Has to be specified like : -nf="-O")
-s {0,1,2,3,4,5}, --speed {0,1,2,3,4,5}
Scan speed. (Default : 3)
-a API, --api API Specify API key for vulnerability detection for faster scanning. (Default : None)
-y, --yesplease Don't ask for anything. (Full automatic mode)
-m {evade,noise,normal}, --mode {evade,noise,normal}
Scan mode.
-nt TIMEOUT, --noisetimeout TIMEOUT
Noise mode timeout. (Default : None)
-c CONFIG, --config CONFIG
Specify a config file to use. (Default : None)
-v, --version Print version and exit.

TODO

  • Vulnerability detection based on version.
  • Easy to read output.
  • Function to output results to a file.
  • pypi package for easily installing with just pip install autopwn-suite.
  • Automatically install nmap if it's not installed.
  • Noise mode. (Does nothing but create a lot of noise)
  • .deb package for Debian-based systems like Kali Linux and Parrot Security.
  • Argument for passing custom nmap flags.
  • Config file argument to specify configurations in a separate config file.
  • Function to automatically download exploits related to a vulnerability.
  • Arch Linux package for Arch-based systems like BlackArch and ArchAttack.
  • Separate script for checking local privilege escalation vulnerabilities.
  • Windows and OSX support.
  • Functionality to brute force common services like ssh, vnc, ftp etc.
  • Built in reverse shell handler that automatically stabilizes shell like pwncat.
  • Function to generate reverse shell commands based on IP and port.
  • GUI interface.
  • Meterpreter payload generator with common evasion techniques.
  • Fileless malware unique to AutoPWN Suite.
  • Daemon mode.
  • Option to email the results automatically.
  • Web application analysis.
  • Web application content discovery mode. (dirbusting)
  • Option to use as a module.

Contributing to AutoPWN Suite

I would be glad if you are willing to contribute to this project. I look forward to merging your pull request unless it's something that is not needed or just a personal preference. Click here for more info!

Legal

You may not rent, lease, distribute, modify, sell, or transfer the software to a third party. AutoPWN Suite is free for distribution and modification on the condition that credit is provided to the creator and it is not used for commercial purposes. You may not use the software for illegal or nefarious purposes. No liability for consequential damages, to the maximum extent permitted by all applicable laws.

Support or Contact

Having trouble using this tool? You can reach out to me on Discord, create an issue, or start a discussion!



New Privacy Framework for IoT Devices Gives Users Control Over Data Sharing

A newly designed privacy-sensitive architecture aims to enable developers to create smart home apps in a manner that addresses data sharing concerns and puts users in control over their personal information.  Dubbed Peekaboo by researchers from Carnegie Mellon University, the system "leverages an in-home hub to pre-process and minimize outgoing data in a structured and enforceable manner before

RSA – Digital healthcare meets security, but does it really want to?

Technology is understandably viewed as a nuisance to be managed in pursuit of the health organizations’ primary mission

The post RSA – Digital healthcare meets security, but does it really want to? appeared first on WeLiveSecurity

Researchers Disclose Critical Flaws in Industrial Access Control System from Carrier

As many as eight zero-day vulnerabilities have been disclosed in Carrier's LenelS2 HID Mercury access control system that's used widely in healthcare, education, transportation, and government facilities. "The vulnerabilities uncovered allowed us to demonstrate the ability to remotely unlock and lock doors, subvert alarms and undermine logging and notification systems," Trellix security

RSA – APIs, your organization’s dedicated backdoors

API-based data transfer is so rapid, there’s but little time to stop very bad things happening quickly

The post RSA – APIs, your organization’s dedicated backdoors appeared first on WeLiveSecurity

Instagram credentials Stealer: Disguised as Mod App

Authored by Dexter Shin 

In a previous post, McAfee’s Mobile Research Team introduced new Android malware targeting Instagram users who want to increase their followers or likes. As we researched this threat further, we found another malware type that uses different technical methods to steal users’ credentials. The targets are users who are not satisfied with the default functions provided by Instagram. Various Instagram modification applications already exist for those users on the Internet. The new malware we found pretends to be a popular mod app and steals Instagram credentials. 

Behavior analysis 

Instander is one of the best-known Instagram modification applications available for Android devices, helping Instagram users access extra features. The mod app supports uploading high-quality images and downloading posted photos and videos. 

The initial screens of this malware and Instander are similar, as shown below. 

Figure 1. Instander legitimate app (left) and malware (right) 

Next, this malware requests an account (username or email) and password. Finally, this malware displays an error message regardless of whether the login information is correct. 

Figure 2. Malware requests account and password 

The malware steals the user’s username and password in a very unique way. The main trick is to use the Firebase API. First, the user input value is combined with l@gmail.com. This value and a static password (kamalw20051) are then sent via the Firebase API createUserWithEmailAndPassword. The same process is then repeated for the password. After receiving the user’s account and password input, the malware makes this request twice. 

Figure 3. Main method to use Firebase API

Since we cannot see the dashboard of the malware author, we tested it using the same API. As a result, we checked the user input value in plain text on the dashboard. 

Figure 4. Firebase dashboard built for testing

According to the Firebase documentation, the createUserWithEmailAndPassword API creates a new user account associated with the specified email address and password. Because the first parameter must follow an email pattern, the malware author uses the above code to produce valid email patterns regardless of the user's input values. 

Because it is an API for creating accounts in Firebase, the administrator can check the account name in the Firebase dashboard. The victim’s account and password are submitted as the Firebase account name, so they appear as plain text without hashing or masking. 
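Based on the behavior described above, a hypothetical reconstruction of the exfiltration trick looks like the following. The endpoint and field names follow the public Firebase Auth REST API; the exact concatenation with l@gmail.com is inferred from the analysis, and the real malware calls the Android Firebase SDK rather than building requests by hand.

```python
import json

# Hypothetical reconstruction: the stolen value rides in the "email" field of a
# standard Firebase signUp request, so it appears as an account name on the
# attacker's Firebase dashboard.
SIGNUP_URL = "https://www.googleapis.com/identitytoolkit/v3/relyingparty/signupNewUser"
STATIC_PASSWORD = "kamalw20051"  # static value reported in the analysis

def exfil_request_body(stolen_value):
    """Build the createUserWithEmailAndPassword request body for one stolen value."""
    return json.dumps({
        "email": f"{stolen_value}l@gmail.com",  # input + 'l@gmail.com' per the analysis
        "password": STATIC_PASSWORD,
        "returnSecureToken": True,
    })
```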

Network traffic 

An interesting point about the malware’s network traffic is that it communicates with the Firebase server in Protobuf format, even though the initial configuration of this Firebase API uses the JSON format. Although the Protobuf format is still readable enough, it can be assumed that the malware author intentionally attempts to obfuscate the network traffic through these additional settings. Also, the domain used for data transfer (www.googleapis.com) is managed by Google. Because the domain is so common and not inherently dangerous, many network filtering and firewall solutions do not flag it. 

Conclusion 

As mentioned, users should always be careful about installing third-party apps. Aside from the types of malware we’ve introduced so far, attackers are trying to steal users’ credentials in a variety of ways. Therefore, you should employ security software on your mobile devices and always keep it up to date. 

Fortunately, McAfee Mobile Security is able to detect this as Android/InstaStealer and protect you from similar threats. For more information visit  McAfee Mobile Security 

Indicators of Compromise 

SHA256: 

  • 238a040fc53ba1f27c77943be88167d23ed502495fd83f501004356efdc22a39 

The post Instagram credentials Stealer: Disguised as Mod App appeared first on McAfee Blog.

MIT Researchers Discover New Flaw in Apple M1 CPUs That Can't Be Patched

A novel hardware attack dubbed PACMAN has been demonstrated against Apple's M1 processor chipsets, potentially arming a malicious actor with the capability to gain arbitrary code execution on macOS systems. It leverages "speculative execution attacks to bypass an important memory protection mechanism, ARM Pointer Authentication, a security feature that is used to enforce pointer integrity," MIT

Iranian Hackers Spotted Using a new DNS Hijacking Malware in Recent Attacks

The Iranian state-sponsored threat actor tracked under the moniker Lyceum has turned to using a new custom .NET-based backdoor in recent campaigns directed against the Middle East. "The new malware is a .NET based DNS Backdoor which is a customized version of the open source tool 'DIG.net,'" Zscaler ThreatLabz researchers Niraj Shivtarkar and Avinash Kumar said in a report published last week. "

Researchers Disclose Rooting Backdoor in Mitel IP Phones for Businesses

Cybersecurity researchers have disclosed details of two medium-security flaws in Mitel 6800/6900 desk phones that, if successfully exploited, could allow an attacker to gain root privileges on the devices. Tracked as CVE-2022-29854 and CVE-2022-29855 (CVSS score: 6.8), the access control issues were discovered by German penetration testing firm SySS, following which patches were shipped in May

Chinese Hackers Distribute Backdoored Web3 Wallets for iOS and Android Users

A technically sophisticated threat actor known as SeaFlower has been targeting Android and iOS users as part of an extensive campaign that mimics official cryptocurrency wallet websites intending to distribute backdoored apps that drain victims' funds. Said to be first discovered in March 2022, the cluster of activity "hint[s] to a strong relationship with a Chinese-speaking entity yet to be

Researchers Detail PureCrypter Loader Cyber Criminals Using to Distribute Malware

Cybersecurity researchers have detailed the workings of a fully-featured malware loader dubbed PureCrypter that's being purchased by cyber criminals to deliver remote access trojans (RATs) and information stealers. "The loader is a .NET executable obfuscated with SmartAssembly and makes use of compression, encryption, and obfuscation to evade antivirus software products," Zscaler's Romain Dumont

Comprehensive, Easy Cybersecurity for Lean IT Security Teams Starts with XDR

Breaches don't just happen to large enterprises. Threat actors are increasingly targeting small businesses. In fact, 43% of data breaches involved small to medium-sized businesses. But there is a glaring discrepancy. Larger businesses typically have the budget to keep their lights on if they are breached. Most small businesses (83%), however, don't have the financial resources to recover if they

As Internet-Connected Medical Devices Multiply, So Do Challenges

To consumers, the Internet of Things might bring to mind a smart fridge that lets you know when to buy more eggs, or the ability to control your home’s lighting and temperature remotely through your phone.

But for cybersecurity professionals, internet-connected medical devices are more likely to be top-of-mind.

Not only is the Internet of Medical Things, or IoMT, surging — with the global market projected to reach $160 billion by 2027, according to Emergen Research — the stakes can be quite high, and sometimes even matters of life or death.

The risk to the individual patients is very small, experts caution, noting bad actors are far more likely to disrupt hospital operations, use unsecure devices to access other parts of the network or hold machines and data hostage for ransom.

“When people ask me, ’Should I be worried?’ I tell them no, and here’s why,” said Matthew Clapham, a veteran product cybersecurity specialist. “In the medical space, every single time I’ve probed areas that could potentially compromise patient safety, I’ve always been impressed with what I’ve found.”

That doesn’t mean the risk is zero, noted Christos Sarris, a longtime information security analyst. He shared an anecdote in Cisco Secure’s recent e-book, “Building Security Resilience,” about finding malware on an intensive care unit device that compromised a pump used to deliver precise doses of medicine.

Luckily, the threat, which was included in a vendor-provided patch, was caught during testing.

“The self-validation was fine,” Sarris said in a follow-up interview. “The vendor’s technicians signed off on it. So we only found this unusual behavior because we tested the system for several days before returning it to use.”

But because such testing protocols take valuable equipment out of service and soak up the attention of often-stretched IT teams, they’re not the norm everywhere, he added.

Sarris and Clapham were among several security experts we spoke to for a deeper dive into the challenges of IoT medical device security and top-line strategies for protecting patients and hospitals.

Every device is different

Connected medical devices are becoming so integral to modern health care that a single hospital room might have 20 of them, Penn Medicine’s Dan Costantino noted in Healthcare IT News.

Sarris, who is currently an information security manager at Sainsbury’s, outlined some of the challenges this reality presents for hospital IT teams.

Health care IT teams are responsible for devices made by a multiplicity of vendors, including large, well-known brands, cheaper off-brand vendors, and small manufacturers of highly specialized instruments, he said. That’s a lot to keep up with, and teams don’t always have direct access to operating systems, patching, and security testing, and instead rely on vendors to provide necessary updates and maintenance.

“Even today, you will rarely see proper security testing on these devices,” he said. “The biggest challenge is the environment. It’s not tens, it’s hundreds of devices. And each device is designed for a specific purpose. It has its own operating system, its own operational needs and so forth. So it’s very, very difficult — the IT teams can’t know everything.”

Cisco Advisory CISO Wolfgang Goerlich noted that one unique challenge for securing medical devices is that they often can’t be patched or replaced. Capital outlays are high and devices might be kept in service for a decade or more.

“So we effectively have a small window of time — which can be measured in hours or years, depending on how fortunate we are — where a device is not vulnerable to any known attacks,” he said. “And then, when they do become vulnerable, we have a long-tailed window of vulnerability.”

Or, as Clapham summed it up, “The bits are going to break down much faster than the iron.”

The Food and Drug Administration is taking the issue seriously, however, and actively working to improve how security risks are addressed throughout a device’s life cycle, as well as to mandate better disclosure of vulnerabilities when they are discovered.

“FDA seeks to require that devices have the capability to be updated and patched in a timely manner; that premarket submissions to FDA include evidence demonstrating the capability from a design and architecture perspective for device updating and patching… and that device firms publicly disclose when they learn of a cybersecurity vulnerability so users know when a device they use may be vulnerable and to provide direction to customers to reduce their risk,” Kevin Fu, acting director of medical device security at the FDA’s Center for Devices and Radiological Health, explained to MedTech Dive last year.

The network side

For hospitals and other health care providers, improving the security posture of connected devices boils down to a few key, and somewhat obvious, things: attention to network security, attention to other fundamentals like a zero-trust security framework more broadly, and investing in the necessary staffing and time to do the work right, Goerlich said.

“If everything is properly segmented, the risk of any of these devices being vulnerable and exploited goes way, way down,” he said. “But getting to that point is a journey.”

Sarris agrees, noting many hospitals have flat networks — that is, they reduce the cost and effort needed to administer them by keeping everything connected in a single domain or subdomain. Isolating these critical and potentially vulnerable devices from the rest of the network improves security, but increases the complexity and costs of oversight, including for things like providing remote access to vendors so they can provide support.

“It’s important to connect these devices into a network that’s specifically designed around the challenges they present,” Sarris said. “You may not have security control on the devices themselves, but you can have security controls around them. You can use micro segmentation, you can use network monitoring, et cetera. Some of these systems, they’re handling a lot of sensitive information and they don’t even support the encryption of data in transit — it can really be all over the place.”
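As a toy illustration of that allowlist idea, the controls Sarris describes amount to permitting a segmented device VLAN to reach only a handful of known hosts. All addresses below are invented for the example; real deployments would enforce this in switches or firewalls rather than application code.

```python
import ipaddress

# Toy allowlist policy for a segmented medical-device VLAN: devices may reach
# only the vendor remote-support host and the monitoring collector.
DEVICE_VLAN = ipaddress.ip_network("10.20.30.0/24")   # assumed device segment
ALLOWED_DESTS = {
    ipaddress.ip_address("10.0.0.5"),  # vendor remote-support jump host (assumed)
    ipaddress.ip_address("10.0.0.9"),  # network-monitoring collector (assumed)
}

def flow_permitted(src, dst):
    """Allow a flow only from the device VLAN to an explicitly allowed host."""
    return (ipaddress.ip_address(src) in DEVICE_VLAN
            and ipaddress.ip_address(dst) in ALLOWED_DESTS)
```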

The device side

The COVID-19 pandemic put a lot of financial pressure on health systems, Goerlich noted. During the virus’ peaks, many non-emergency procedures were delayed or canceled, hitting hospitals’ bottom lines pretty hard over several years. This put even greater pressure on already strained cybersecurity budgets at a time of increasing needs.

“Again, devices have time as a security property,” Goerlich said, “which means we’ve got two years of vulnerabilities that may not have been addressed. And which also probably means we’re going to try to push the lifecycle of that equipment out and try to maintain it for two more years.”

Clapham, who previously served as director of cybersecurity for software and the cloud at GE Healthcare, said device manufacturers are working hard to ensure new devices are as secure as they can be when they’re first rolled out and when new features are added through software updates.

“When you’re adding new functionality that might need to talk to a central service somewhere, either locally or in the cloud, that could have implications for security — so that’s where we go in and do our due diligence,” he said.

The revolution that needs to happen is one of mindset, Clapham said. Companies are waking up to the new reality of not just making a well-functioning device that has to last for over a decade, but of making a software suite to support the device that will need to be updated and have new features added over that long lifespan.

This should include adding additional headroom and flexibility in the hardware, he said. While it adds to costs on the front end, it will add longevity as software is updated over time. (Imagine the computer you bought in 2007 trying to run the operating system you have now.)

“Ultimately, customers should expect a secure device, but they should also expect to pay for the additional overhead it will take to make sure that device stays secure over time,” he said. “And manufacturers need to plan for upgradability and the ability to swap out components with minimal downtime.”

Additional Resources


We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!

Cisco Secure Social Channels

Instagram
Facebook
Twitter
LinkedIn

Critical Flaw in Cisco Secure Email and Web Manager Lets Attackers Bypass Authentication

Cisco on Wednesday rolled out fixes to address a critical security flaw affecting Email Security Appliance (ESA) and Secure Email and Web Manager that could be exploited by an unauthenticated, remote attacker to sidestep authentication. Assigned the CVE identifier CVE-2022-20798, the bypass vulnerability is rated 9.8 out of a maximum of 10 on the CVSS scoring system and stems from improper

Difference Between Agent-Based and Network-Based Internal Vulnerability Scanning

For years, the two most popular methods for internal scanning, agent-based and network-based, were considered to be about equal in value, each bringing its own strengths to bear. However, with remote working now the norm in most if not all workplaces, it feels a lot more like agent-based scanning is a must, while network-based scanning is an optional extra. This article will go in-depth on the

Learn Cybersecurity with Palo Alto Networks Through this PCCSA Course @ 93% OFF

In the world of cybersecurity, reputation is everything. Most business owners have little understanding of the technical side, so they have to rely on credibility. Founded back in 2005, Palo Alto Networks is a cybersecurity giant that has earned the trust of the business community thanks to its impressive track record. The company now provides services to over 70,000 organizations in 150

Over a Dozen Flaws Found in Siemens' Industrial Network Management System

Cybersecurity researchers have disclosed details about 15 security flaws in Siemens SINEC network management system (NMS), some of which could be chained by an attacker to achieve remote code execution on affected systems. "The vulnerabilities, if exploited, pose a number of risks to Siemens devices on the network including denial-of-service attacks, credential leaks, and remote code execution

Do You Have Ransomware Insurance? Look at the Fine Print

Insurance exists to protect the insured party against catastrophe, but the insurer needs protection so that its policies are not abused – and that's where the fine print comes in. However, in the case of ransomware insurance, the fine print is becoming contentious and arguably undermining the usefulness of ransomware insurance. In this article, we'll outline why, particularly given the current

Mitigate Ransomware in a Remote-First World

Ransomware has been a thorn in the side of cybersecurity teams for years. With the move to remote and hybrid work, this insidious threat has become even more of a challenge for organizations everywhere. 2021 was a case study in ransomware due to the wide variety of attacks, significant financial and economic impact, and diverse ways that organizations responded. These attacks should be seen as a

Researchers Disclose 56 Vulnerabilities Impacting OT Devices from 10 Vendors

Nearly five dozen security vulnerabilities have been disclosed in devices from 10 operational technology (OT) vendors due to what researchers call "insecure-by-design practices." Collectively dubbed OT:ICEFALL by Forescout, the 56 issues span as many as 26 device models from Bently Nevada, Emerson, Honeywell, JTEKT, Motorola, Omron, Phoenix Contact, Siemens, and Yokogawa. "Exploiting these

Phishing awareness training: Help your employees avoid the hook

Educating employees about how to spot phishing attacks can strike a much-needed blow for network defenders

The post Phishing awareness training: Help your employees avoid the hook appeared first on WeLiveSecurity

Researchers Uncover Ways to Break the Encryption of 'MEGA' Cloud Storage Service

A new piece of research from academics at ETH Zurich has identified a number of critical security issues in the MEGA cloud storage service that could be leveraged to break the confidentiality and integrity of user data. In a paper titled "MEGA: Malleable Encryption Goes Awry," the researchers point out how MEGA's system does not protect its users against a malicious server, thereby enabling a

Chinese Hackers Distributing SMS Bomber Tool with Malware Hidden Inside

A threat cluster with ties to a hacking group called Tropic Trooper has been spotted using a previously undocumented malware coded in Nim language to strike targets as part of a newly discovered campaign. The novel loader, dubbed Nimbda, is "bundled with a Chinese language greyware 'SMS Bomber' tool that is most likely illegally distributed in the Chinese-speaking web," Israeli cybersecurity

Manual vs. SSPM: Research on What Streamlines SaaS Security Detection & Remediation

When it comes to keeping SaaS stacks secure, IT and security teams need to be able to streamline the detection and remediation of misconfigurations in order to best protect their SaaS stack from threats. However, while companies adopt more and more apps, their increase in SaaS security tools and staff has lagged behind, as found in the 2022 SaaS Security Survey Report.  The survey report,

Learn NIST Inside Out With 21 Hours of Training @ 86% OFF

In cybersecurity, many of the best jobs involve working on government projects. To get a security clearance, you need to prove that you meet NIST standards. Cybersecurity firms are particularly interested in people who understand the RMF, or Risk Management Framework — a U.S. government guideline for taking care of data. The NIST Cybersecurity & Risk Management Frameworks Course helps you

Researchers Warn of 'Matanbuchus' Malware Campaign Dropping Cobalt Strike Beacons

A malware-as-a-service (MaaS) dubbed Matanbuchus has been observed spreading through phishing campaigns, ultimately dropping the Cobalt Strike post-exploitation framework on compromised machines. Matanbuchus, like other malware loaders such as BazarLoader, Bumblebee, and Colibri, is engineered to download and execute second-stage executables from command-and-control (C&C) servers on infected

What Are Shadow IDs, and How Are They Crucial in 2022?

Just before last Christmas, in a first-of-a-kind case, JPMorgan was fined $200M for employees using non-sanctioned applications for communicating about financial strategy. No mention of insider trading, naked shorting, or any malevolence. Just employees circumventing regulation using, well, Shadow IT. Not because they tried to obfuscate or hide anything, simply because it was a convenient tool

Critical Security Flaws Identified in CODESYS ICS Automation Software

CODESYS has released patches to address as many as 11 security flaws that, if successfully exploited, could result in information disclosure and a denial-of-service (DoS) condition, among others.  "These vulnerabilities are simple to exploit, and they can be successfully exploited to cause consequences such as sensitive information leakage, PLCs entering a severe fault state, and arbitrary code

ZuoRAT Malware Hijacking Home-Office Routers to Spy on Targeted Networks

A never-before-seen remote access trojan dubbed ZuoRAT has been singling out small office/home office (SOHO) routers as part of a sophisticated campaign targeting North American and European networks. The malware "grants the actor the ability to pivot into the local network and gain access to additional systems on the LAN by hijacking network communications to maintain an undetected foothold,"

New YTStealer Malware Aims to Hijack Accounts of YouTube Content Creators

Cybersecurity researchers have documented a new information-stealing malware that targets YouTube content creators by plundering their authentication cookies. Dubbed "YTStealer" by Intezer, the malicious tool is believed to be sold as a service on the dark web, distributed using fake installers that also drop RedLine Stealer and Vidar. "What sets YTStealer aside from other…

Top of Mind Security Insights from In-Person Interactions

The past few months have been chock-full of conversations with security customers, partners, and industry leaders. After two years of virtual engagements, in-person events like our CISO Forum and Cisco Live, as well as the industry's RSA Conference, underscore the power of face-to-face interactions. It's a reminder of just how enriching conversations are and how incredibly interconnected the world is. And it's only made closer by the security experiences that impact us all.

I had the pleasure of engaging with some of the industry’s best and brightest, sharing ideas, insights, and what keeps us up at night. The conversations offered more than an opportunity to reconnect and put faces with names. It was a chance to discuss some of the most critical cybersecurity issues and implications that are top of mind for organizations.  

The collective sentiment is clear: the need for better security has never been stronger, and securing the future is good business. Disruptions are happening faster than ever, making our interconnected world more unpredictable. Hybrid work is here to stay, and hybrid, complex architectures will remain a reality for most organizations, dramatically expanding the threat surface. More and more businesses operate as ecosystems, so attacks have profound ripple effects across value chains. Attacks are becoming more bespoke, and government-sponsored threat actors and ransomware-as-a-service operations continue to evolve, challenging businesses to minimize the time from initial breach to complete compromise.

Digital transformation and Zero Trust 

Regardless of where organizations are in their digital transformations, they are progressively embarking on journeys to unify networking and secure connectivity needs. Mobility, BYOD (bring your own device), cloud, increased collaboration, and the consumerization of IT have necessitated a new type of access control security: zero trust. Supporting a modern enterprise across a distributed network and infrastructure involves the ability to validate user IDs, continuously verify authentication and device trust, and protect every application, all without compromising user experience. Zero trust offers organizations a simpler approach to securing access for everyone, from any device, anywhere, while making it harder for attackers.
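The heart of that model can be sketched in a few lines. The sketch below is purely illustrative (the request fields and policy function are invented names, not any vendor's API), but it captures the deny-by-default evaluation of identity, device trust, and context that zero trust describes:

```python
# Minimal zero-trust access sketch: every request is evaluated against user
# identity, device posture, and context -- nothing is trusted by default.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool   # recent, valid credential check (e.g. MFA)
    device_trusted: bool       # device posture verified (managed, patched)
    anomalous_context: bool    # unusual location, time, or behavior

def allow_access(req: AccessRequest) -> bool:
    """Grant access only when every signal checks out; deny by default."""
    return (req.user_authenticated
            and req.device_trusted
            and not req.anomalous_context)

# A stolen password alone is not enough: the untrusted device is denied.
print(allow_access(AccessRequest(True, False, False)))  # False
print(allow_access(AccessRequest(True, True, False)))   # True
```

The point of the sketch is the shape of the decision, not the specific signals: access is re-evaluated per request, and any single failing signal denies it.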

Seeking a simpler, smarter ecosystem 

Simplicity continues to be a hot topic, particularly in the context of functionality. Beyond a frictionless user experience, the real value to customers lies in easing operational challenges. Security practitioners want an easier way to secure the edge, access, and operations, including threat intelligence and response. Key to this simplified experience is connecting and managing business-critical control points and vulnerabilities, exchanging data, and contextualizing threat intelligence. That requires a smarter ecosystem that brings capabilities together, unifying admin, policy, visibility, and control: simplicity that works hard and smart, and enhances security posture. The ultimate simplicity is improved efficacy for the organization.

Everyone is an insider  

Insider cyber-attacks are among the fastest growing threats in the modern security landscape and an increasingly common cause of data breaches. Using their authorized access, employees intentionally or inadvertently cause harm by stealing, exposing, or destroying sensitive company data. Either way, the consequences are the same: big costs and massive disruption. It's also one of the reasons why "identity as the new perimeter" is trending, as the primary objective of all advanced attacks is to gain privileged credentials. Insider attack attempts are not slowing down, but advanced telemetry, threat detection and protection, and continuous trusted access all help decelerate the trend, making organizations better able to expose suspicious or malicious activity by insiders. Innovations now enable businesses to analyze all network traffic against historical patterns of employee access and determine whether to let an employee continue uninterrupted or prompt them to authenticate again.
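The traffic-and-history analysis described above can be illustrated with a toy baseline check. Everything here is hypothetical (the profile fields, user names, and function are invented for illustration, not any product's logic), but it shows the shape of the decision: compare a session against an employee's historical access pattern and prompt for re-authentication on deviation.

```python
# Toy insider-threat heuristic: flag sessions that deviate from a user's
# historical access pattern and require re-authentication for them.
HISTORY = {
    "alice": {
        "usual_hours": range(8, 18),          # typically works 08:00-17:59
        "usual_resources": {"crm", "wiki"},   # resources she normally touches
    },
}

def needs_reauth(user: str, hour: int, resource: str) -> bool:
    """True if the session deviates from the user's baseline."""
    profile = HISTORY.get(user)
    if profile is None:
        return True  # no baseline yet: verify first
    off_hours = hour not in profile["usual_hours"]
    new_resource = resource not in profile["usual_resources"]
    return off_hours or new_resource

print(needs_reauth("alice", 10, "crm"))        # False: matches baseline
print(needs_reauth("alice", 3, "payroll_db"))  # True: off-hours, unusual resource
```

Real deployments score many more signals (volume, destination, device, peer-group behavior) and weigh them statistically, but the continue-or-challenge decision at the end is the same.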

The interconnection conundrum and the ransomware ruse   

Supply chain attacks have become one of the biggest security worries for businesses. Not only are the disruptions debilitating, but their full impact is often impossible to predict. Attackers are highly aware that supply chains are composed of larger entities often tightly connected to a broad array of smaller and less cyber-savvy organizations. Lured by lucrative payouts, attackers seek the weakest supply chain link for a successful breach. In fact, two of the four biggest cyber-attacks that the Cisco Talos team saw in the field last year were supply chain attacks that deployed ransomware on their targets' networks: SolarWinds and REvil's attack exploiting the Kaseya managed service provider. While there is no way to protect absolutely against ransomware, businesses are taking steps to bolster their defenses and protect against disaster.

Data privacy is getting personal 

Security incidents targeting personal information are on the rise. In fact, 86 percent of global consumers were victims of identity theft, credit/debit card fraud, or a data breach in 2020. In one recent engagement, the Cisco Talos team discovered that the API on a customer's website could have been exploited by an attacker to steal sensitive personal information. The good news is that governments and businesses alike are leaning into data privacy and protection, adhering to global regulations that enforce high standards for collecting, using, disclosing, storing, securing, accessing, transferring, and processing personal data. Within the past year, the U.S. government implemented new rules to ensure companies and federal agencies follow required cybersecurity standards. As long as cyber criminals continue seeking to breach our privacy and data, these rules help hold us accountable.

Through all the insightful discussions with customers, partners, and industry leaders, a theme emerged: when it comes to cybersecurity, preparation is key, and the cost of being wrong is extraordinary. By acknowledging that there will continue to be disruptions, businesses can prepare for whatever comes next. And when it comes, they will not only weather the storm but come out of it stronger. The good news is that the Cisco Security Business Group is already on this journey, actively addressing these challenges and empowering our customers to reach their full potential, securely.


We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!

Cisco Secure Social Channels

Instagram
Facebook
Twitter
LinkedIn

Amazon Quietly Patches 'High Severity' Vulnerability in Android Photos App

Amazon, in December 2021, patched a high severity vulnerability affecting its Photos app for Android that could have been exploited to steal a user's access tokens. "The Amazon access token is used to authenticate the user across multiple Amazon APIs, some of which contain personal data such as full name, email, and address," Checkmarx researchers João Morais and Pedro Umbelino said. "Others…

New 'SessionManager' Backdoor Targeting Microsoft IIS Servers in the Wild

Newly discovered malware has been used in the wild since at least March 2021 to backdoor Microsoft Exchange servers belonging to a wide range of entities worldwide, with infections lingering in 20 organizations as of June 2022. Dubbed SessionManager, the malicious tool masquerades as a module for Internet Information Services (IIS), Microsoft's web server software for Windows systems, after…

Some Worms Use Their Powers for Good

Gardeners know that worms are good. Cybersecurity professionals know that worms are bad. Very bad. In fact, worms are the most devastating force for evil known to the computing world. The MyDoom worm holds the dubious distinction of costliest computer malware ever, responsible for some $52 billion in damage. In second place… Sobig, another worm. It turns out, however, that there are…

Cyberattacks: A very real existential threat to organizations

One in five organizations have teetered on the brink of insolvency after a cyberattack. Can your company keep hackers at bay?

The post Cyberattacks: A very real existential threat to organizations appeared first on WeLiveSecurity

Researchers Share Techniques to Uncover Anonymized Ransomware Sites on Dark Web

Cybersecurity researchers have detailed the various measures ransomware actors have taken to obscure their true identity online as well as the hosting location of their web server infrastructure. "Most ransomware operators use hosting providers outside their country of origin (such as Sweden, Germany, and Singapore) to host their ransomware operations sites," Cisco Talos researcher Paul Eubanks…

Researchers Uncover Malicious NPM Packages Stealing Data from Apps and Web Forms

A widespread software supply chain attack has targeted the NPM package manager since at least December 2021 with rogue modules designed to steal data entered in forms by users on websites that include them. The coordinated attack, dubbed IconBurst by ReversingLabs, involves no fewer than two dozen NPM packages that include obfuscated JavaScript, which comes with malicious code to harvest…
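One simple defensive habit this kind of campaign suggests is cross-checking a project's lockfile against package names reported as malicious. The sketch below is a toy illustration (the blocklisted names and the lockfile snippet are assumed, not taken from the IconBurst report); real workflows would rely on `npm audit` or a dedicated supply-chain scanner.

```python
# Toy dependency check: flag lockfile entries whose names appear on a
# blocklist of packages reported as malicious.
import json

REPORTED_BAD = {"icon-package", "ionicio"}  # hypothetical example names

def flag_suspect_packages(lockfile_text: str) -> list[str]:
    """Return blocklisted package names found in a package-lock.json string."""
    lock = json.loads(lockfile_text)
    deps = lock.get("dependencies", {})
    return sorted(name for name in deps if name in REPORTED_BAD)

sample = '{"dependencies": {"react": {}, "icon-package": {}}}'
print(flag_suspect_packages(sample))  # ['icon-package']
```

Name matching only catches known-bad packages; typosquats and freshly published modules still require behavioral scanning or manual review.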

Hive Ransomware Upgrades to Rust for More Sophisticated Encryption Method

The operators of the Hive ransomware-as-a-service (RaaS) scheme have overhauled their file-encrypting software to fully migrate to Rust and adopt a more sophisticated encryption method. "With its latest variant carrying several major upgrades, Hive also proves it's one of the fastest evolving ransomware families, exemplifying the continuously changing ransomware ecosystem," Microsoft Threat…