Welcome to our weekly roundup, where we share what you need to know about the cybersecurity news and events that happened over the past few days. This week, read about one of Microsoft’s largest Patch Tuesday updates ever, including fixes for 120 vulnerabilities and two zero-days. Also, learn about Trend Micro’s new integrations with Amazon Web Services (AWS).
Read on:
Microsoft Patches 120 Vulnerabilities, Two Zero-Days
This week Microsoft released fixes for 120 vulnerabilities, including two zero-days, in 13 products and services as part of its monthly Patch Tuesday rollout. The August release marks its third-largest Patch Tuesday update, bringing the total number of security fixes for 2020 to 862. “If they maintain this pace, it’s quite possible for them to ship more than 1,300 patches this year,” says Dustin Childs of Trend Micro’s Zero-Day Initiative (ZDI).
Trend Micro has discovered an unusual infection related to Xcode developer projects. Upon further investigation, it was discovered that an affected developer’s Xcode project contained the source malware, which leads down a rabbit hole of malicious payloads. Most notable in our investigation is the discovery of two zero-day exploits: one is used to steal cookies via a flaw in the behavior of Data Vaults; the other is used to abuse the development version of Safari.
Top Tips for Home Cybersecurity and Privacy in a Coronavirus-Impacted World: Part 1
We’re all now living in a post-COVID-19 world characterized by uncertainty, mass home working and remote learning. To help you adapt to these new conditions while protecting what matters most, Trend Micro has developed a two-part blog series on ‘the new normal’. Part one identifies the scope and specific cyber-threats of the new normal.
Trend Micro enhances agility and automation in cloud security through integrations with Amazon Web Services (AWS). Through this collaboration, Trend Micro Cloud One offers the broadest platform support and API integration to protect AWS infrastructure whether building with Amazon Elastic Compute Cloud (Amazon EC2) instances, AWS Lambda, AWS Fargate, containers, Amazon Simple Storage Service (Amazon S3), or Amazon Virtual Private Cloud (Amazon VPC) networking.
Shedding Light on Security Considerations in Serverless Cloud Architectures
The big shift to serverless computing is imminent. According to a 2019 survey, 21% of enterprises have already adopted serverless technology, while 39% are considering it. Trend Micro’s new research on serverless computing aims to shed light on the security considerations in serverless environments and help adopters in keeping their serverless deployments as secure as possible.
In One Click: Amazon Alexa Could be Exploited for Theft of Voice History, PII, Skill Tampering
Amazon’s Alexa voice assistant could be exploited to hand over user data due to security vulnerabilities in the service’s subdomains. The smart assistant, which is found in devices such as the Amazon Echo and Echo Dot — with over 200 million shipments worldwide — was vulnerable to attackers seeking user personally identifiable information (PII) and voice recordings.
New Attack Lets Hackers Decrypt VoLTE Encryption to Spy on Phone Calls
A team of academic researchers presented a new attack called ‘ReVoLTE’ that could let remote attackers break the encryption used by VoLTE voice calls and spy on targeted phone calls. The attack doesn’t exploit any flaw in the Voice over LTE (VoLTE) protocol; instead, it leverages the weak implementation of the LTE mobile network by most telecommunications providers in practice, allowing an attacker to eavesdrop on the encrypted phone calls made by targeted victims.
An Advanced Group Specializing in Corporate Espionage is on a Hacking Spree
A Russian-speaking hacking group specializing in corporate espionage has carried out 26 campaigns since 2018 in attempts to steal vast amounts of data from the private sector, according to new findings. The hacking group, dubbed RedCurl, stole confidential corporate documents including contracts, financial documents, employee records and legal records, according to research published this week by the security firm Group-IB.
Walgreens Discloses Data Breach Impacting Personal Health Information of More Than 72,000 Customers
The second-largest pharmacy chain in the U.S. recently disclosed a data breach that may have compromised the personal health information (PHI) of more than 72,000 individuals across the United States. According to Walgreens spokesman Jim Cohn, prescription information of customers was stolen during May protests, when around 180 of the company’s 9,277 locations were looted.
Top Tips for Home Cybersecurity and Privacy in a Coronavirus-Impacted World: Part 2
The past few months have seen radical changes to our work and home life under the Coronavirus threat, upending norms and confining millions of American families within just four walls. In this context, it’s not surprising that more of us are spending an increasing portion of our lives online. In the final blog of this two-part series, Trend Micro discusses what you can do to protect your family, your data, and access to your corporate accounts.
What are your thoughts on Trend Micro’s tips to make your home cybersecurity and privacy stronger in the COVID-19-impacted world? Share your thoughts in the comments below or follow me on Twitter to continue the conversation: @JonLClay.
The post This Week in Security News: Microsoft Patches 120 Vulnerabilities, Including Two Zero-Days and Trend Micro Brings DevOps Agility and Automation to Security Operations Through Integration with AWS Solutions appeared first on .
Introduction Whether your organization is tired of being held back by the cybersecurity workforce skills gap or your management team has watched the worst that a cyberattack could do to a peer organization, the time has come to do something about it. One of the best decisions your organization can make is to explore how […]
The post How to pick the best cyber range for your cybersecurity training needs and budget appeared first on Infosec Resources.
Note: This article originally appeared in Verisign’s Q3 2020 Domain Name Industry Brief.
Verisign is deeply committed to protecting our critical internet infrastructure from potential cybersecurity threats, and to keeping up to date on the changing cyber landscape.
Over the years, cybercriminals have grown more sophisticated, adapting to changing business practices and diversifying their approaches in non-traditional ways. We have seen security threats continue to evolve in 2020, as many businesses have shifted to a work-from-home posture due to the COVID-19 pandemic. For example, the phenomenon of “Zoom-bombing” video meetings and online learning sessions had not been a widespread issue until, suddenly, it became one.
As more people began accessing company applications and files over their home networks, IT departments implemented new tools and set new policies to find the right balance between protecting company assets and sensitive information, and enabling employees to be just as productive at home as they would be in the office. Even the exponential jump in the use of home-networked printers that might or might not be properly secured represented a new security consideration for some corporate IT teams.
An increase in phishing scams accompanied this shift in working patterns. About a month after much of the global workforce began working from home in greater numbers, the Federal Bureau of Investigation (FBI) reported about a 300 percent to 400 percent spike in cybersecurity complaints received by its Internet Crime Complaint Center (IC3) each day. According to the International Criminal Police Organization (Interpol), “[o]f global cyber-scams, 59% are coming in the form of spear phishing.” These phishing campaigns targeted an array of sectors, such as healthcare and government agencies, by imitating health experts or COVID-related charities.
Proactive steps can help businesses improve their cybersecurity hygiene and guard against phishing scams. One of these steps is for companies to focus part of their efforts on educating employees on how to detect and avoid malicious websites in phishing emails. Companies can start by building employee understanding of how to identify the destination domain of a URL (Uniform Resource Locator, commonly referred to as a “link”) embedded in an email that may be malicious. URLs can be complex and confusing, and cybercriminals, who are well aware of that complexity, often use deceptive tactics within the URLs to mask the malicious destination domain. Companies can take proactive steps to inform their employees of these deceptive tactics and help them avoid malicious websites. Some of the most common tactics are described in Table 1 below.
Tactic | What is it?
Combosquatting | Adding words such as “secure,” “login” or “account” to a familiar domain name to trick users into thinking it is affiliated with the known domain name.
Typosquatting | Using domain names that resemble a familiar name but incorporate common typographical mistakes, such as reversing letters or leaving out or adding a character.
Levelsquatting | Using familiar names/domain names as part of a subdomain within a URL, making it difficult to discover the real destination domain.
Homograph attacks | Using homograph, or lookalike, domain names, such as substituting the uppercase “I” or number “1” where a lowercase “L” should have been used, or using “é” instead of an “e.”
Misplaced domain | Planting familiar domain names within the URL as a way of adding a familiar domain name into a complex-looking URL. The familiar domain name could be found in a path (after a “/”), as part of the additional parameters (after a “?”), as an anchor/fragment identifier (after a “#”) or in the HTTP credentials (before “@”).
URL-encoded characters | Placing URL-encoded characters (%[code]), which are sometimes used in URL parameters, into the domain name itself.
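As a concrete illustration of the tactics in Table 1, the short Python sketch below extracts the true destination host of a URL and flags levelsquatting and misplaced-domain patterns. This is an illustrative example, not a production phishing filter: the `trusted` parameter and the two-label registered-domain heuristic are simplifying assumptions.

```python
from urllib.parse import urlsplit, unquote

def destination_host(url: str) -> str:
    """Return the actual destination host of a URL.

    urlsplit().hostname already discards HTTP credentials
    (anything before "@"), the path, query and fragment --
    exactly the places where a familiar name may be planted.
    """
    host = urlsplit(url).hostname or ""
    return unquote(host)  # reveal %-encoded characters hiding in the host

def looks_deceptive(url: str, trusted: str = "example.com") -> bool:
    """Flag URLs where a trusted name appears somewhere other than
    the registered domain (levelsquatting / misplaced domain)."""
    host = destination_host(url)
    # Registered domain approximated as the last two labels; real code
    # should consult the Public Suffix List instead.
    registered = ".".join(host.split(".")[-2:])
    return trusted in url and registered != trusted

# A familiar name in the subdomain or in the credentials is not the destination:
print(looks_deceptive("https://example.com.evil.net/login"))    # True
print(looks_deceptive("https://example.com@evil.net/account"))  # True
print(looks_deceptive("https://www.example.com/login"))         # False
```

A real deployment would also need to handle typosquatting and homograph lookalikes, which require edit-distance and Unicode-confusable checks rather than simple string matching.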
Teaching users to find and understand the domain portion of the URL can have lasting and positive effects on an organization’s ability to avoid phishing links. By providing employees (and their families) with this basic information, companies can better protect themselves against cybersecurity issues such as compromised networks, financial losses and data breaches.
To learn more about what you can do to protect yourself and your business against possible cyber threats, check out the STOP. THINK. CONNECT. campaign online at https://www.stopthinkconnect.org. STOP. THINK. CONNECT. is a global online safety awareness campaign led by the National Cyber Security Alliance and in partnership with the Anti-Phishing Working Group to help all digital citizens stay safer and more secure online.
The post Cybersecurity Considerations in the Work-From-Home Era appeared first on Verisign Blog.
For over a decade, the Internet Corporation for Assigned Names and Numbers (ICANN) and its multi-stakeholder community have engaged in an extended dialogue on the topic of DNS abuse, and the need to define, measure and mitigate DNS-related security threats. With increasing global reliance on the internet and DNS for communication, connectivity and commerce, the members of this community have important parts to play in identifying, reporting and mitigating illegal or harmful behavior, within their respective roles and capabilities.
As we consider the path forward on necessary and appropriate steps to improve mitigation of DNS abuse, it’s helpful to reflect briefly on the origins of this issue within ICANN, and to recognize the various and relevant community inputs to our ongoing work.
As a starting point, it’s important to understand ICANN’s central role in preserving the security, stability, resiliency and global interoperability of the internet’s unique identifier system, and also the limitations established within ICANN’s bylaws. ICANN’s primary mission is to ensure the stable and secure operation of the internet’s unique identifier systems, but as expressly stated in its bylaws, ICANN “shall not regulate (i.e., impose rules and restrictions on) services that use the internet’s unique identifiers or the content that such services carry or provide, outside the express scope of Section 1.1(a).” As such, ICANN’s role is important, but limited, when considering the full range of possible definitions of “DNS Abuse,” and developing a comprehensive understanding of security threat categories and the roles and responsibilities of various players in the internet infrastructure ecosystem is required.
In support of this important work, ICANN’s generic top-level domain (gTLD) contracted parties (registries and registrars) continue to engage with ICANN, and with other stakeholders and community interest groups, to address key factors related to effective and appropriate DNS security threat mitigation, including:
To better understand the various roles, responsibilities and processes, it’s important to first define illegal and abusive online activity. While perspectives may vary across our wide range of interest groups, the emerging consensus on definitions and terminology is that these activities can be categorized as DNS Security Threats, Infrastructure Abuse, Illegal Content, or Abusive Content, with ICANN’s remit generally limited to the first two categories.
Behavior within each of these categories constitutes abuse, and it is incumbent on members of the community to actively work to combat and mitigate these behaviors where they have the capability, expertise and responsibility to do so. We recognize the benefit of coordination with other entities, including ICANN within its bylaw-mandated remit, across their respective areas of responsibility.
The ICANN Organization has been actively involved in advancing work on DNS abuse, including the 2017 initiation of the Domain Abuse Activity Reporting (DAAR) system by the Office of the Chief Technology Officer. DAAR is a system for studying and reporting on domain name registration and security threats across top-level domain (TLD) registries, with an overarching purpose to develop a robust, reliable, and reproducible methodology for analyzing security threat activity, which the ICANN community may use to make informed policy decisions. The first DAAR reports were issued in January 2018 and they are updated monthly. Also in 2017, ICANN published its “Framework for Registry Operators to Address Security Threats,” which provides helpful guidance to registries seeking to improve their own DNS security posture.
The ICANN Organization also plays an important role in enforcing gTLD contract compliance and implementing policies developed by the community via its bottom-up, multi-stakeholder processes. For example, over the last several years, it has conducted registry and registrar audits of the anti-abuse provisions in the relevant agreements.
The ICANN Organization has also been a catalyst for increased community attention and action on DNS abuse, including initiating the DNS Security Facilitation Initiative Technical Study Group, which was formed to investigate mechanisms to strengthen collaboration and communication on security and stability issues related to the DNS. Over the last two years, there have also been multiple ICANN cross-community meeting sessions dedicated to the topic, including the most recent session hosted by the ICANN Board during its Annual General Meeting in October 2021. Also, in 2021, ICANN formalized its work on DNS abuse into a dedicated program within the ICANN Organization. These enforcement and compliance responsibilities are very important to ensure that all of ICANN’s contracted parties are living up to their obligations, and that any so-called “bad actors” are identified and remediated or de-accredited and removed from serving the gTLD registry or registrar markets.
The ICANN Organization continues to develop new initiatives to help mitigate DNS security threats, including: (1) expanding DAAR to integrate some country code TLDs, and to eventually include registrar-level reporting; (2) work on COVID domain names; (3) contributions to the development of a Domain Generating Algorithms Framework and facilitating waivers to allow registries and registrars to act on imminent security threats, including botnets at scale; and (4) plans for the ICANN Board to establish a DNS abuse caucus.
As early as 2009, the ICANN community began to identify the need for additional safeguards to help address DNS abuse and security threats, and those community inputs increased over time and have reached a crescendo over the last two years. In the early stages of this community dialogue, the ICANN Governmental Advisory Committee, via its Public Safety Working Group, identified the need for additional mechanisms to address “criminal activity in the registration of domain names.” In the context of renegotiation of the Registrar Accreditation Agreement between ICANN and accredited registrars, and the development of the New gTLD Base Registry Agreement, the GAC played an important and influential role in highlighting this need, providing formal advice to the ICANN Board, which resulted in new requirements for gTLD registry and registrar operators, and new contractual compliance requirements for ICANN.
Following the launch of the 2012 round of new gTLDs, and the finalization of the 2013 amendments to the RAA, several ICANN bylaw-mandated review teams engaged further on the issue of DNS Abuse. These included the Competition, Consumer Trust and Consumer Choice Review Team (CCT-RT), and the second Security, Stability and Resiliency Review Team (SSR2-RT). Both final reports identified and reinforced the need for additional tools to help measure and combat DNS abuse. Also, during this timeframe, the GAC, along with the At-Large Advisory Committee and the Security and Stability Advisory Committee, issued their own respective communiques and formal advice to the ICANN Board reiterating or reinforcing past statements, and providing support for recommendations in the various Review Team reports. Most recently, the SSAC issued SAC 115 titled “SSAC Report on an Interoperable Approach to Addressing Abuse Handling in the DNS.” These ICANN community group inputs have been instrumental in bringing additional focus and/or clarity to the topic of DNS abuse, and have encouraged ICANN and its gTLD registries and registrars to look for improved mechanisms to address the types of abuse within our respective remits.
During 2020 and 2021, ICANN’s gTLD contracted parties have been constructively engaged with other parts of the ICANN community, and with ICANN Org, to advance improved understanding on the topic of DNS security threats, and to identify new and improved mechanisms to enhance the security, stability and resiliency of the domain name registration and resolution systems. Collectively, the registries and registrars have engaged with nearly all groups represented in the ICANN community, and we have produced important documents related to DNS abuse definitions, registry actions, registrar abuse reporting, domain generating algorithms, and trusted notifiers. These all represent significant steps forward in framing the context of the roles, responsibilities and capabilities of ICANN’s gTLD contracted parties, and, consistent with our Letter of Intent commitments, Verisign has been an important contributor, along with our partners, in these Contracted Party House initiatives.
In addition, the gTLD contracted parties and ICANN Organization continue to engage constructively on a number of fronts, including upcoming work on standardized registry reporting, which will help result in better data on abuse mitigation practices that will help to inform community work, future reviews, and provide better visibility into the DNS security landscape.
It is important to note that groups outside of ICANN’s immediate multi-stakeholder community have contributed significantly to the topic of DNS abuse mitigation:
Internet & Jurisdiction Policy Network
The Internet & Jurisdiction Policy Network is a multi-stakeholder organization addressing the tension between the cross-border internet and national jurisdictions. Its secretariat facilitates a global policy process engaging over 400 key entities from governments, the world’s largest internet companies, technical operators, civil society groups, academia and international organizations from over 70 countries. The I&JP has been instrumental in developing multi-stakeholder inputs on issues such as trusted notifier, and Verisign has been a long-time contributor to that work since the I&JP’s founding in 2012.
DNS Abuse Institute
The DNS Abuse Institute was formed in 2021 to develop “outcomes-based initiatives that will create recommended practices, foster collaboration and develop industry-shared solutions to combat the five areas of DNS Abuse: malware, botnets, phishing, pharming, and related spam.” The Institute was created by Public Interest Registry, the registry operator for the .org TLD.
Global Cyber Alliance
The Global Cyber Alliance is a nonprofit organization dedicated to making the internet a safer place by reducing cyber risk. The GCA builds programs, tools and partnerships to sustain a trustworthy internet to enable social and economic progress for all.
ECO “topDNS” DNS Abuse Initiative
Eco is the largest association of the internet industry in Europe. Eco is a long-standing advocate of an “Internet with Responsibility” and of self-regulatory approaches, such as the DNS Abuse Framework. The eco “topDNS” initiative will help bring together stakeholders with an interest in combating and mitigating DNS security threats, and Verisign is a supporter of this new effort.
Other Community Groups
Verisign contributes to the anti-abuse, technical and policy communities: We continuously engage with ICANN and an array of other industry partners to help ensure the continued safe and secure operation of the DNS. For example, Verisign is actively engaged in anti-abuse, technical and policy communities such as the Anti-Phishing and Messaging, Malware and Mobile Anti-Abuse Working Groups, FIRST and the Internet Engineering Task Force.
As a leader in the domain name industry and DNS ecosystem, Verisign supports and has contributed to the cross-community efforts enumerated above. In addition, Verisign also engages directly by:
An important concept and approach for mitigating illegal and abusive activity online is the ability to engage with and rely upon third-party “trusted notifiers” to identify and report such incidents at the appropriate level in the DNS ecosystem. Verisign has supported and been engaged in the good work of the Internet & Jurisdiction Policy Network since its inception, and we’re encouraged by its recent progress on trusted notifier framing. As mentioned earlier, there are some key questions to be addressed as we consider the viability of engaging trusted notifiers, or building trusted notifier entities, to help mitigate illegal and abusive online activity.
Verisign’s recent experience with the U.S. government (NTIA and FDA) in combating illegal online opioid sales has been very helpful in illuminating a possible approach for third-party trusted notifier engagement. As noted, we have also benefited from direct engagement with the Internet Watch Foundation and law enforcement in combating CSAM. These recent examples of third-party engagement have underscored the value of a well-formed and executed notification regime, supported by clear expectations, due diligence and due process.
Discussions around trusted notifiers and an appropriate framework for engagement are under way, and Verisign recently engaged with other registries and registrars to lead the development of such a framework for further discussion within the ICANN community. We have significant expertise and experience as an infrastructure provider within our areas of technical, legal and contractual responsibility, and we are aggressive in protecting our operations from bad actors. But in matters related to illegal or abusive content, we need and value contributions from third parties to appropriately identify such behavior when supported by necessary evidence and due diligence. Precisely how such third-party notifications can be formalized and supported at scale is an open question, but one that requires further exploration and work. Verisign is committed to continuing to contribute to these ongoing discussions as we work to mitigate illegal and abusive threats to the security, stability and resiliency of the internet.
Over the last several years, DNS abuse and DNS-related security threat mitigation have been very important topics of discussion in and around the ICANN community. In cooperation with ICANN, contracted parties, and other groups within the ICANN community, the DNS ecosystem, including Verisign, has been constructively engaged in developing a common understanding and practical work to advance these efforts, with a goal of meaningfully reducing the level and impact of malicious activity in the DNS. In addition to its contractual compliance functions, ICANN’s contributions have been important in advancing this work, and it continues to serve a critical coordination and facilitation function that brings the ICANN community together on this topic. The ICANN community’s recent focus on DNS abuse has been helpful, significant progress has been made, and more work is needed to ensure continued progress in mitigating DNS security threats. As we look ahead to 2022, we are committed to collaborating constructively with ICANN and the ICANN community to deliver on these important goals.
The post Ongoing Community Work to Mitigate Domain Name System Security Threats appeared first on Verisign Blog.
The Domain Name System has provided the fundamental service of mapping internet names to addresses from almost the earliest days of the internet’s history. Billions of internet-connected devices use DNS continuously to look up Internet Protocol addresses of the named resources they want to connect to — for instance, a website such as blog.verisign.com. Once a device has the resource’s address, it can then communicate with the resource using the internet’s routing system.
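The lookup step described here can be exercised with Python’s standard library; this is a sketch of the client-side view only, since the operating system supplies the actual resolver (stub, cache, recursive server and so on):

```python
import socket

def lookup(name: str) -> list[str]:
    """Resolve a name to the set of IP addresses a device could
    connect to, using whatever resolver the OS is configured with."""
    infos = socket.getaddrinfo(name, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# "localhost" is used so the example works offline; for a public name
# such as "blog.verisign.com" the same call returns its public addresses.
print(lookup("localhost"))
```

Whichever address the application picks from the returned list, the connection it opens is then carried by the internet’s routing system, which is why the two systems’ security properties are intertwined.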
Just as ensuring that DNS is secure, stable and resilient is a priority for Verisign, so is making sure that the routing system has these characteristics. Indeed, DNS itself depends on the internet’s routing system for its communications, so routing security is vital to DNS security too.
To better understand how these challenges can be met, it’s helpful to step back and remember what the internet is: a loosely interconnected network of networks that interact with each other at a multitude of locations, often across regions or countries.
Packets of data are transmitted within and between those networks, which utilize a collection of technical standards and rules called the IP suite. Every device that connects to the internet is uniquely identified by its IP address, which can take the form of either a 32-bit IPv4 address or a 128-bit IPv6 address. Similarly, every network that connects to the internet has an Autonomous System Number, which is used by routing protocols to identify the network within the global routing system.
The primary job of the routing system is to let networks know the available paths through the internet to specific destinations. Today, the system largely relies on a decentralized and implicit trust model — a hallmark of the internet’s design. No centralized authority dictates how or where networks interconnect globally, or which networks are authorized to assert reachability for an internet destination. Instead, networks share knowledge with each other about the available paths from devices to destination: They route “by rumor.”
Under the Border Gateway Protocol (BGP), the internet’s de facto inter-domain routing protocol, local routing policies decide where and how internet traffic flows, but each network independently applies its own policies on what actions it takes, if any, with traffic that traverses its network.
BGP has scaled well over the past three decades because 1) it operates in a distributed manner, 2) it has no central point of control (nor failure), and 3) each network acts autonomously. While networks may base their routing policies on an array of pricing, performance and security characteristics, ultimately BGP can use any available path to reach a destination. Often, the choice of route may depend upon personal decisions by network administrators, as well as informal assessments of technical and even individual reliability.
Two prominent types of operational and security incidents occur in the routing system today: route hijacks and route leaks. Route hijacks reroute internet traffic to an unintended destination, while route leaks propagate routing information to an unintended audience. Both types of incidents can be accidental as well as malicious.
Preventing route hijacks and route leaks requires considerable coordination in the internet community, a concept that fundamentally goes against the BGP’s design tenets of distributed action and autonomous operations. A key characteristic of BGP is that any network can potentially announce reachability for any IP addresses to the entire world. That means that any network can potentially have a detrimental effect on the global reachability of any internet destination.
Fortunately, there is a solution already receiving considerable deployment momentum, the Resource Public Key Infrastructure. RPKI provides an internet number resource certification infrastructure, analogous to the traditional PKI for websites. RPKI enables number resource allocation authorities and networks to specify Route Origin Authorizations that are cryptographically verifiable. ROAs can then be used by relying parties to confirm the routing information shared with them is from the authorized origin.
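To sketch how a relying party might use ROAs, the illustrative Python below classifies a BGP announcement as valid, invalid, or not-found. It is a simplification of the route origin validation procedure standardized in RFC 6811; the ASNs and prefixes are documentation examples, not real allocations.

```python
import ipaddress
from dataclasses import dataclass

@dataclass
class ROA:
    prefix: ipaddress.IPv4Network  # or IPv6Network
    max_length: int                # longest prefix the origin may announce
    origin_asn: int                # AS authorized to originate the prefix

def origin_validation(announced: str, origin_asn: int, roas: list[ROA]) -> str:
    """Classify an announcement per the RFC 6811 origin-validation states."""
    net = ipaddress.ip_network(announced)
    covering = [r for r in roas if net.subnet_of(r.prefix)]
    if not covering:
        return "not-found"  # no ROA covers this prefix
    for roa in covering:
        if roa.origin_asn == origin_asn and net.prefixlen <= roa.max_length:
            return "valid"
    return "invalid"        # covered, but wrong origin or too specific

roas = [ROA(ipaddress.ip_network("192.0.2.0/24"), 24, 64500)]
print(origin_validation("192.0.2.0/24", 64500, roas))     # valid
print(origin_validation("192.0.2.0/24", 64511, roas))     # invalid (wrong origin)
print(origin_validation("192.0.2.128/25", 64500, roas))   # invalid (exceeds maxLength)
print(origin_validation("198.51.100.0/24", 64500, roas))  # not-found
```

In deployment, routers receive these validated prefix-to-origin mappings from RPKI relying-party software and apply local policy to them; an "invalid" result does not automatically mean the route is dropped.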
RPKI is standards-based and appears to be gaining traction in improving BGP security. But it also brings new challenges.
Specifically, RPKI creates new external and third-party dependencies that, as adoption continues, ultimately replace the traditionally autonomous operation of the routing system with a more centralized model. If too tightly coupled to the routing system, these dependencies may impact the robustness and resilience of the internet itself. Also, because RPKI relies on DNS and DNS depends on the routing system, network operators need to be careful not to introduce tightly coupled circular dependencies.
Regional Internet Registries, the organizations responsible for top-level number resource allocation, can potentially have direct operational implications on the routing system. Unlike DNS, the global RPKI as deployed does not have a single root of trust. Instead, it has multiple trust anchors, one operated by each of the RIRs. RPKI therefore brings significant new security, stability and resiliency requirements to RIRs, updating their traditional role of simply allocating ASNs and IP addresses with new operational requirements for ensuring the availability, confidentiality, integrity, and stability of this number resource certification infrastructure.
As part of improving BGP security and encouraging adoption of RPKI, the routing community started the Mutually Agreed Norms for Routing Security initiative in 2014. Supported by the Internet Society, MANRS aims to reduce the most common routing system vulnerabilities by creating a culture of collective responsibility towards the security, stability and resiliency of the global routing system. MANRS is continuing to gain traction, guiding internet operators on what they can do to make the routing system more reliable.
Routing by rumor has served the internet well, and a decade ago it may have been ideal because it avoided systemic dependencies. However, the increasingly critical role of the internet and the evolving cyberthreat landscape require a better approach for protecting routing information and preventing route leaks and route hijacks. As network operators deploy RPKI with security, stability and resiliency, the billions of internet-connected devices that use DNS to look up IP addresses can then communicate with those resources through networks that not only share routing information with one another as they’ve traditionally done, but also do something more. They’ll make sure that the routing information they share and use is secure — and route without rumor.
In part one of this issue of our Black Hat Asia NOC blog, you will find:
Cisco Meraki was asked by Black Hat Events to be the Official Wired and Wireless Network Equipment provider for Black Hat Asia 2022 in Singapore, 10-13 May 2022, in addition to providing Mobile Device Management (since Black Hat USA 2021), Malware Analysis (since Black Hat USA 2016), and DNS (since Black Hat USA 2017) for the Network Operations Center. We were proud to collaborate with NOC partners Gigamon, IronNet, MyRepublic, NetWitness and Palo Alto Networks.
To accomplish this undertaking in just a few weeks, once the conference had a green light under the new COVID protocols, Cisco Meraki and Cisco Secure leadership gave their full support to send the necessary hardware, software licenses and staff to Singapore. Thirteen Cisco engineers from Singapore, Australia, the United States and the United Kingdom deployed to the Marina Bay Sands Convention Center, with two additional Cisco engineers supporting remotely from the United States.
From attendee to press to volunteer – coming back to Black Hat as NOC volunteer by Humphrey Cheung
Loops in the networking world are usually considered a bad thing. Spanning tree loops and routing loops happen in an instant and can ruin your whole day, but during the second week of May, I made a different kind of loop. Twenty years ago, I first attended the Black Hat and Defcon conventions – yay Caesars Palace and Alexis Park – as a wide-eyed tech newbie who barely knew what WEP hacking, Driftnet image stealing and session hijacking meant. The community was amazing, and the friendships and knowledge I gained springboarded my IT career.
In 2005, I was lucky enough to become a Senior Editor at Tom’s Hardware Guide and attended Black Hat as accredited press from 2005 to 2008. From writing about the latest hardware zero-days to learning how to steal cookies from the master himself, Robert Graham, I can say, without any doubt, Black Hat and Defcon were my favorite events of the year.
Since 2016, I have been a Technical Solutions Architect at Cisco Meraki and have worked on insanely large Meraki installations – some with twenty thousand branches and more than a hundred thousand access points, so setting up the Black Hat network should be a piece of cake right? Heck no, this is unlike any network you’ve experienced!
As an attendee and press, I took the Black Hat network for granted. To take a phrase that we often hear about Cisco Meraki equipment, “it just works”. Back then, while I did see access points and switches around the show, I never really dived into how everything was set up.
A serious challenge was to secure the needed hardware and ship it in time for the conference, given the global supply chain issues. Special recognition to Jeffry Handal for locating the hardware and obtaining the approvals to donate to Black Hat Events. For Black Hat Asia, Cisco Meraki shipped:
Let’s start with availability. iPads and iPhones scanned QR codes to register attendees, and badge printers needed access to the registration system. Training rooms each had their own separate wireless networks – after all, Black Hat attendees get a baptism by fire in network defense and attack. To top it all off, hundreds of attendees gulped down terabytes of data through the main conference wireless network.
All this connectivity was provided by Cisco Meraki access points, switches and security appliances, along with integrations into SecureX, Umbrella and other products. We fielded a literal army of engineers to stand up the network in less than two days – just in time for the training sessions of May 10-13 and the Black Hat Briefings and Business Hall on May 12 and 13.
Let’s talk security and visibility. For a few days, the Black Hat network is probably one of the most hostile in the world. Attendees learn new exploits, download new tools and are encouraged to test them out. Being able to drill down on attendee connection details and traffic was instrumental in ensuring attendees didn’t get too crazy.
On the wireless front, we made extensive use of Radio Profiles to reduce interference by tuning power and channel settings. We enabled band steering to push more clients onto the 5GHz bands versus 2.4GHz, and watched the Location Heatmap like a hawk, looking for hotspots and dead areas. Handling the barrage of wireless change requests – enabling or disabling an SSID, moving VLANs (Virtual Local Area Networks), switching between tunneling and NAT mode – was a snap with the Meraki Dashboard.
While the Cisco Meraki Dashboard is extremely powerful, we happily supported exporting logs to, and integrating with, major event collectors, such as the NetWitness SIEM and even the Palo Alto firewall. On Thursday morning, the NOC team found a potentially malicious MacBook Pro performing vulnerability scans against the Black Hat management network. It is a balance: we must allow trainings and demos to connect to malicious websites, and to download and execute malware. However, there is a Code of Conduct that all attendees are expected to follow, posted at Registration with a QR code.
The Cisco Meraki network was exporting syslog and other information to the Palo Alto firewall, and after correlating the data between the Palo Alto Dashboard and Cisco Meraki client details page, we tracked down the laptop to the Business Hall.
We briefed the NOC management, who confirmed the scanning was a violation of the Code of Conduct, and the device was blocked in the Meraki Dashboard, with an instruction to come to the NOC.
The device name and location made it very easy to determine which conference attendee it belonged to.
A delegation from the NOC went to the Business Hall, politely waited for the demo to finish at the booth and had a thoughtful conversation with the person about scanning the network.
Coming back to Black Hat as a NOC volunteer was an amazing experience. While it made for long days with little sleep, I really can’t think of a better way to give back to the conference that helped jumpstart my professional career.
Meraki MR, MS, MX and Systems Manager by Paul Fidler
With the invitation extended to Cisco Meraki to provide network access, both from a wired and wireless perspective, there was an opportunity to show the value of the Meraki platform integration capabilities of Access Points (AP), switches, security appliances and mobile device management.
The first of these was the use of the Meraki API. We imported the list of MAC addresses of the Meraki MRs, to ensure that the APs were named appropriately and tagged, using a single source-of-truth document shared with the NOC management and partners, with the ability to update en masse at any time.
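As a rough illustration of this kind of bulk update, the sketch below is not the exact script used at the conference: the API key, the row format of the source-of-truth document, and the helper names are assumptions for the example, though `PUT /devices/{serial}` is the real Dashboard API v1 call for renaming and tagging a device.

```python
# Hedged sketch: bulk-name and tag Meraki APs from a source-of-truth list.
# API_KEY and the (mac, name, tags) row format are illustrative assumptions.
import json
import urllib.request

API_KEY = "your-meraki-api-key"   # assumption: a real Dashboard API key
BASE = "https://api.meraki.com/api/v1"

def build_updates(rows):
    """Turn source-of-truth rows of (mac, name, tags) into per-device payloads."""
    return [
        {"mac": mac.lower(), "body": {"name": name, "tags": tags.split()}}
        for mac, name, tags in rows
    ]

def apply_updates(serial_by_mac, updates):
    """PUT each device's name/tags to the Dashboard API (one call per AP)."""
    for u in updates:
        serial = serial_by_mac[u["mac"]]
        req = urllib.request.Request(
            f"{BASE}/devices/{serial}",
            data=json.dumps(u["body"]).encode(),
            headers={
                "Authorization": f"Bearer {API_KEY}",
                "Content-Type": "application/json",
            },
            method="PUT",
        )
        urllib.request.urlopen(req)
```

Keeping the mapping logic separate from the API calls makes it easy to re-run the whole rename whenever the shared document changes.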
On the first day of NOC setup, the Cisco team walked around the venue to discuss AP placements with the staff of the Marina Bay Sands. Whilst we had a simple PowerPoint showing approximate AP placements for the conference, the venue team had an incredibly detailed floor plan of the venue. This was acquired as a PDF and uploaded into the Meraki Dashboard; with a little fine tuning, it aligned perfectly with the Google Map.
Meraki APs were then placed physically in the venue meeting and training rooms, and very roughly on the floor plan. One of the team members then used a printout of the floor plan to accurately mark the placement of the APs. Having the APs named, as mentioned above, made this an easy task (walking around the venue notwithstanding!). This enabled accurate heatmap capability.
The Location Heatmap was a new capability for the Black Hat NOC, and the client data visualized in the NOC was of continued interest to the Black Hat management team, showing, for example, which trainings, briefings and sponsor booths drew the most interest.
The ability to use SSID Availability was incredibly useful. It allowed ALL of the access points to be placed within a single Meraki Network, despite the separate training networks running during the week, as well as TWO dedicated SSIDs for the Registration and lead-tracking iOS devices (more on which later): one for initial provisioning (which was later turned off), and one for certificate-based authentication, for a very secure connection.
We were able to monitor the number of connected clients, network usage, passersby, and location analytics throughout the conference days. We provided visibility access to the Black Hat NOC management and the technology partners (along with full API access), so they could integrate with the network platform.
Meraki alerts are exactly that: the ability to be alerted to something that happens in the Dashboard. The default behavior is to be emailed when something happens. Since emails got lost in the noise, a webhook was created in SecureX orchestration to consume Meraki alerts and send them to Slack (the messaging platform within the Black Hat NOC), using the native template in the Meraki Dashboard. The first alert created was for an AP going down: we would be alerted after an AP had been down for five minutes, the smallest window available.
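The relay path described above can be sketched roughly as follows. This is a hedged, standalone approximation, not the actual SecureX orchestration workflow: the Slack webhook URL is a placeholder, and the Meraki alert fields shown (`alertType`, `deviceName`, `networkName`) are a simplified subset of a real alert payload.

```python
# Hedged sketch: reshape a Meraki webhook alert and forward it to Slack.
# SLACK_WEBHOOK is a placeholder; real alerts carry many more fields.
import json
import urllib.request

SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # assumption

def format_alert(alert):
    """Turn a Meraki alert payload into a Slack message body."""
    return {
        "text": f":rotating_light: {alert.get('alertType', 'Meraki alert')} "
                f"on {alert.get('deviceName', 'unknown device')} "
                f"in network {alert.get('networkName', 'unknown network')}"
    }

def forward(alert):
    """Post the formatted message to the Slack incoming webhook."""
    body = json.dumps(format_alert(alert)).encode()
    req = urllib.request.Request(
        SLACK_WEBHOOK, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
```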
The bot was ready; however, the APs stayed up the entire time!
Applying the lessons learned at Black Hat Europe 2021, for the initial configuration of the conference iOS devices, we set up the Registration iPads and lead retrieval iPhones with Umbrella, Secure Endpoint and WiFi config. Devices were, as in London, initially configured using Apple Configurator, to both supervise and enroll the devices into a new Meraki Systems Manager instance in the Dashboard.
However, Black Hat Asia 2022 offered us a unique opportunity to show off some of the more integrated functionality.
System Apps were hidden and various restrictions (disallow joining of unknown networks, disallow tethering to computers, etc.) were applied, as well as a standard WPA2 SSID for the devices that the device vendor had set up (we gave them the name of the SSID and Password).
We also stood up a new SSID and turned on Sentry, which allows you to provision managed devices with not only the SSID information, but also a dynamically generated certificate. The certificate authority and RADIUS server needed for this 802.1X authentication are included in the Meraki Dashboard automatically! When a device attempts to authenticate to the network without the certificate, it doesn’t get access. This SSID, using SSID Availability, was available only on the access points in the Registration area.
Using the Sentry allowed us to easily identify devices in the client list.
One of the alerts generated via syslog by Meraki, and then viewable and correlated in the NetWitness SIEM, was a ‘de-auth’ event that came from an access point. Whilst we had the IP address of the device, making it easy to find, the fact that the event was an 802.1X de-auth narrowed the possibilities down to JUST the iPads and iPhones used for Registration (as all other SSIDs were using WPA2). This was further confirmed by the certificate name seen in the de-auth:
Along with the certificate name was the name of the AP: R**
One of the inherent problems with iOS device location is indoor use, as GPS signals just aren’t strong enough to penetrate modern buildings. However, because the Meraki access points were accurately placed on the floor plan in the Dashboard, and because the Meraki Systems Manager iOS devices were in the same Dashboard organization as the access points, we got a much more accurate map of devices compared to Black Hat Europe 2021 in London.
When the conference Registration closed on the last day and the Business Hall Sponsors all returned their iPhones, we were able to remotely wipe all of the devices, removing all attendee data, prior to returning them to the device contractor.
Meraki Scanning API Receiver by Christian Clasen
Leveraging the ubiquity of both WiFi and Bluetooth radios in mobile devices and laptops, Cisco Meraki’s wireless access points can detect and provide location analytics to report on user foot traffic behavior. This can be useful in retail scenarios where customers desire location and movement data to better understand the trends of engagement in their physical stores.
Meraki can aggregate real-time data on detected WiFi and Bluetooth devices and triangulate their location rather precisely when the floorplan and AP placement have been diligently designed and documented. At the Black Hat Asia conference, we mapped the AP locations carefully to ensure the highest accuracy possible.
This scanning data is available for clients whether or not they are associated with the access points. At the conference, we were able to get very detailed heatmaps and time-lapse animations of attendee movement throughout the day. This data is valuable to conference organizers in determining the popularity of certain talks, attendance at keynote presentations, and foot traffic at booths.
This was great for monitoring during the event, but the Dashboard would only provide 24-hours of scanning data, limiting what we could do when it came to long-term data analysis. Fortunately for us, Meraki offers an API service we can use to capture this treasure trove offline for further analysis. We only needed to build a receiver for it.
The Scanning API requires that the customer stand up infrastructure to store the data, and then register with the Meraki cloud using a verification code and secret. It is composed of two endpoints:
[GET] https://yourserver/ – Returns the validator string in the response body. This endpoint is called by Meraki to validate the receiving server. It expects to receive a string that matches the validator defined in the Meraki Dashboard for the respective network.
[POST] https://yourserver/ – Accepts an observation payload from the Meraki cloud. This endpoint is responsible for receiving the observation data provided by Meraki. The URL path should match that of the [GET] request used for validation.
The response body will consist of an array of JSON objects containing the observations at an aggregate per network level. The JSON will be determined based on WiFi or BLE device observations as indicated in the type parameter.
What we needed was a simple technology stack that would contain (at minimum) a publicly accessible web server capable of TLS. In the end, the simplest implementation was a web server written using Python Flask, in a Docker container, deployed in AWS, connected through ngrok.
In fewer than 50 lines of Python, we could accept the inbound connection from Meraki and reply with the chosen verification code. We would then listen for the incoming POST data and dump it into a local data store for future analysis. Since this was to be a temporary solution (the duration of the four-day conference), the thought of registering a public domain and configuring TLS certificates wasn’t particularly appealing. An excellent solution for these types of API integrations is ngrok (https://ngrok.com/). And a handy Python wrapper was available for simple integration into the script (https://pyngrok.readthedocs.io/en/latest/index.html).
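A minimal sketch of such a receiver is below, assuming the Flask stack described above. The validator and secret values are placeholders for those configured in the Meraki Dashboard, and the file-based data store is an illustrative choice; the ngrok tunnel is noted in a comment rather than wired in.

```python
# Minimal sketch of a Meraki Scanning API receiver using Flask.
# VALIDATOR, SECRET, and DATA_FILE are placeholders/assumptions, not the
# exact conference configuration.
import json

from flask import Flask, request

VALIDATOR = "your-dashboard-validator"   # assumption: copied from the Dashboard
SECRET = "your-shared-secret"            # assumption: set when registering the URL
DATA_FILE = "observations.jsonl"

app = Flask(__name__)

@app.route("/", methods=["GET"])
def validate():
    # Meraki calls this once to confirm we own the receiving server.
    return VALIDATOR

@app.route("/", methods=["POST"])
def ingest():
    payload = request.get_json(force=True)
    # Drop posts that don't carry the shared secret we registered.
    if payload.get("secret") != SECRET:
        return "invalid secret", 403
    # Append each observation batch as one JSON line for later analysis.
    with open(DATA_FILE, "a") as f:
        f.write(json.dumps(payload) + "\n")
    return "ok", 200

if __name__ == "__main__":
    # In the conference setup, an ngrok tunnel exposed this port publicly,
    # e.g. with pyngrok: ngrok.connect(5000, "http")
    app.run(port=5000)
```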
We wanted to easily re-use this stack next time around, so it only made sense to containerize it in Docker. This way, the whole thing could be stood up at the next conference, with one simple command. The image we ended up with would mount a local volume, so that the ingested data would remain persistent across container restarts.
Ngrok allowed us to create a secure tunnel from the container to a publicly resolvable domain in the cloud, with a trusted TLS certificate generated for us. Adding that URL to the Meraki Dashboard was all we needed to do to start ingesting the massive treasure trove of location data from the APs – nearly 1GB of JSON over 24 hours.
This “quick and dirty” solution illustrated the importance of interoperability and openness in the technology space when enabling security operations to gather and analyze the data they require to monitor and secure events like Black Hat, and their enterprise networks as well. It served us well during the conference and will certainly be used again going forward.
Check out part two of the blog, Black Hat Asia 2022 Continued: Cisco Secure Integrations, where we will discuss integrating NOC operations and making your Cisco Secure deployment more effective:
Acknowledgements: Special thanks to the Cisco Meraki and Cisco Secure Black Hat NOC team: Aditya Sankar, Aldous Yeung, Alejo Calaoagan, Ben Greenbaum, Christian Clasen, Felix H Y Lam, George Dorsey, Humphrey Cheung, Ian Redden, Jeffrey Chua, Jeffry Handal, Jonny Noble, Matt Vander Horst, Paul Fidler and Steven Fan.
Also, to our NOC partners NetWitness (especially David Glover), Palo Alto Networks (especially James Holland), Gigamon, IronNet (especially Bill Swearington), and the entire Black Hat / Informa Tech staff (especially Grifter ‘Neil Wyler’, Bart Stump, James Pope, Steve Fink and Steve Oldenbourg).
For more than 20 years, Black Hat has provided attendees with the very latest in information security research, development, and trends. These high-profile global events and trainings are driven by the needs of the security community, striving to bring together the best minds in the industry. Black Hat inspires professionals at all career levels, encouraging growth and collaboration among academia, world-class researchers, and leaders in the public and private sectors. Black Hat Briefings and Trainings are held annually in the United States, Europe and Asia. More information is available at: blackhat.com. Black Hat is brought to you by Informa Tech.
We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!
Cisco Secure Social Channels
In part one of our Black Hat Asia 2022 NOC blog, we discussed building the network with Meraki:
In this part two, we will discuss:
SecureX: Bringing Threat Intelligence Together by Ian Redden
In addition to the Meraki networking gear, Cisco Secure also shipped two Umbrella DNS virtual appliances to Black Hat Asia, for internal network visibility with redundancy, in addition to providing:
Cisco Secure Threat Intelligence (correlated through SecureX)
Donated Partner Threat Intelligence (correlated through SecureX)
alphaMountain.ai threat intelligence
Open-Source Threat Intelligence (correlated through SecureX)
Continued Integrations from past Black Hat events
New Integrations Created at Black Hat Asia 2022
Device type spoofing event by Jonny Noble
Overview
During the conference, a NOC Partner informed us that they received an alert from May 10 concerning an endpoint client that accessed two domains that they saw as malicious:
Client details from Partner:
Based on the user agent, the partner derived that the device type was an Apple iPhone.
SecureX analysis
Umbrella Investigate analysis
Umbrella Investigate positions both domains as low risk, both registered recently in Poland, and both hosted on the same IP:
Despite the low-risk score, the nameservers have high counts of malicious associated domains:
Targeting users in the USA, UK, and Nigeria:
Meraki analysis
Based on the time of the incident, we could trace the device’s location from its IP address. This was thanks to the effort we invested in mapping the exact location of all the Meraki APs deployed across the convention center, with an overlay of the event map covering the area of the event:
Further analysis and conclusions
The device name (LAPTOP-8MLGXXXXXX) and MAC address seen (f4:XX:XX:XX:XX:XX) both matched across the partner and Meraki, so there was no question that we were analyzing the same device.
Based on the user agent captured by the partner, the device type appeared to be an Apple iPhone. However, Meraki reported the device and its OS as “Intel, Android”.
A quick lookup of the MAC address confirmed that the OUI (organizationally unique identifier) f42679 belongs to Intel Malaysia, making it unlikely that this was an Apple iPhone.
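For illustration, an OUI check like the one described can be as simple as a prefix lookup. The tiny vendor table below is an assumption for the example; a production lookup would use the full IEEE OUI registry.

```python
# Hedged sketch of an OUI lookup. The two-entry table is illustrative only;
# real tooling would load the complete IEEE OUI registry.
OUI_VENDORS = {
    "f42679": "Intel Malaysia",   # the prefix discussed in the investigation
    "3c0630": "Apple, Inc.",      # assumed sample entry
}

def oui_vendor(mac):
    """Normalize a MAC address and return the vendor for its 24-bit OUI."""
    prefix = mac.lower().replace(":", "").replace("-", "")[:6]
    return OUI_VENDORS.get(prefix, "unknown")
```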
The description for the training “Web Hacking Black Belt Edition” can be seen here:
https://www.blackhat.com/asia-22/training/schedule/#web-hacking-black-belt-edition–day-25388
It is highly likely that the training content included tools and techniques for spoofing the user agent or device type.
There is also a high probability that the two domains observed were used as part of the training activity, rather than this being part of a live attack.
It is clear that integrating the various Cisco technologies used in this investigation (Meraki wireless infrastructure, SecureX, Umbrella, Investigate), together with the close partnership and collaboration of our NOC partners, positioned us where we needed to be and gave us the tools to swiftly collect the data, join the dots, draw conclusions, and successfully bring the incident to closure.
Self Service with SecureX Orchestration and Slack by Matt Vander Horst
Overview
Since Meraki was a new platform for much of the NOC’s staff, we wanted to make information easier to gather and enable a certain amount of self-service. Since the Black Hat NOC uses Slack for messaging, we decided to create a Slack bot that NOC staff could use to interact with the Meraki infrastructure as well as Palo Alto Panorama using the SecureX Orchestration remote appliance. When users communicate with the bot, webhooks are sent to Cisco SecureX Orchestration to do the work on the back end and send the results back to the user.
Design
Here’s how this integration works:
Workflow #1: Handle Slash Commands
Slash commands are a special type of message built into Slack that allow users to interact with a bot. When a Slack user executes a slash command, the command and its arguments are sent to SecureX Orchestration where a workflow handles the command. The table below shows a summary of the slash commands our bot supported for Black Hat Asia 2022:
Here’s a sample of a portion of the SecureX Orchestration workflow that powers the above commands:
And here’s a sample of firewall logs as returned from the “/pan_traffic_history” command:
Workflow #2: Handle Interactivity
A more advanced form of user interaction comes in the form of Slack blocks. Instead of including a command’s arguments in the command itself, you can execute the command and Slack will present you with a form to complete, like this one for the “/update_vlan” command:
These forms are much more user friendly and allow information to be pre-populated for the user. In the example above, the user can simply select the switch to configure from a drop-down list instead of having to enter its name or serial number. When the user submits one of these forms, a webhook is sent to SecureX Orchestration to execute a workflow. The workflow takes the requested action and sends back a confirmation to the user:
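A rough sketch of the command-dispatch idea is below. It is a standalone approximation, not the actual SecureX Orchestration workflow: the handler bodies are stubs, since the real work (querying Panorama for traffic logs, updating VLANs via the Meraki API) ran in orchestration workflows on the back end.

```python
# Hedged sketch of slash-command dispatch. Slack posts form-encoded fields
# including `command` and `text`; two of the commands named above are routed
# here with stub handlers standing in for the orchestration workflows.
def handle_pan_traffic_history(args):
    return f"fetching firewall traffic logs for {args or 'all clients'}"

def handle_update_vlan(args):
    return f"opening VLAN update form ({args or 'no arguments'})"

HANDLERS = {
    "/pan_traffic_history": handle_pan_traffic_history,
    "/update_vlan": handle_update_vlan,
}

def dispatch(command, text=""):
    """Route a Slack slash command to its handler, echoing unknown commands."""
    handler = HANDLERS.get(command)
    if handler is None:
        return f"unknown command: {command}"
    return handler(text)
```

Keeping the routing table as a plain dict makes adding a new command a one-line change, which matches how the bot was expanded over the conference.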
Conclusion
While these two workflows only scratched the surface of what can be done with SecureX Orchestration webhooks and Slack, we now have a foundation that can be easily expanded upon going forward. We can add additional commands, new forms of interactivity, and continue to enable NOC staff to get the information they need and take necessary action. The goal of orchestration is to make life simpler, whether it is by automating our interactions with technology or making those interactions easier for the user.
Future Threat Vectors to Consider – Cloud App Discovery by Alejo Calaoagan
Since 2017 (starting at Black Hat USA in Las Vegas), Cisco Umbrella has provided DNS security to the Black Hat attendee network, adding layers of traffic visibility previously not seen. Our efforts have largely been successful, identifying thousands of threats over the years and mitigating them via Umbrella’s blocking capabilities when necessary. We took this a step further at Black Hat London 2021, where we introduced our Virtual Appliances to provide source IP attribution for the devices making requests.
Here at Black Hat Asia 2022, we’ve been noodling on additional ways to provide advanced protection for future shows, and it starts with Umbrella’s Cloud Application Discovery feature, which identified 2,286 unique applications accessed by users on the attendee network across the four-day conference. Looking at a snapshot from a single day of the show, Umbrella captured 572,282 DNS requests from all cloud apps, with over 42,000 posing either high or very high risk.
Digging deeper into the data, we see not only the types of apps being accessed…
…but also see the apps themselves…
…and we can flag apps that look suspicious.
We also include risk breakdowns by category…
…and drill downs on each.
While this data alone won’t provide enough information to take action, including it in our analysis, something we have been doing, may provide a window into new threat vectors that previously went unseen. For example, if we identify a compromised device infected with malware, or a device attempting to access restricted resources on the network, we can dig deeper into the types of cloud apps those devices are using and correlate that data with suspicious request activity, potentially uncovering tools we should be blocking in the future.
I can’t say for certain how much this extra data set will help us uncover new threats, but, with Black Hat USA just around the corner, we’ll find out soon.
Using SecureX sign-on to streamline access to the Cisco Stack at Black Hat by Adi Sankar
From five years ago to now, Cisco has tremendously expanded our presence at Black Hat to include a multitude of products. Sign-on was simple when it was just one product (Secure Malware Analytics) and one user to log in. When it came time to add a new technology to the stack, it was added separately as a standalone product with its own login. As the number of products increased, so did the number of Cisco staff at the conference supporting them. Sharing usernames and passwords became tedious, not to mention insecure, especially with 15 Cisco staff, plus partners, accessing the platforms.
The Cisco Secure stack at Black Hat includes SecureX, Umbrella, Malware Analytics, Secure Endpoint (iOS Clarity), and Meraki. All of these technologies natively support SAML SSO with SecureX sign-on. This means that each of our Cisco staff members can have an individual SecureX sign-on account to log into the various consoles, resulting in better role-based access control, better audit logging and an overall better login experience. With SecureX sign-on we can log into all the products after typing a password only once and approving one Cisco DUO Multi-Factor Authentication (MFA) push.
How does this magic work behind the scenes? It’s actually rather simple to configure SSO for each of the Cisco technologies, since they all support SecureX sign-on natively. First and foremost, you must set up a new SecureX org by creating a SecureX sign-on account, creating a new organization and integrating at least one Cisco technology. In this case I created a new SecureX organization for Black Hat and added the Secure Endpoint, Umbrella, Meraki Systems Manager and Secure Malware Analytics modules. Then, from Administration → Users in SecureX, I sent an invite to the Cisco staffers who would be attending the conference, containing a link to create their account and join the Black Hat SecureX organization. Next, let’s take a look at the individual product configurations.
Meraki:
In the Meraki organization settings, enable SecureX sign-on. Then, under Organization → Administrators, add a new user and specify SecureX sign-on as the authentication method. Meraki even lets you limit users to particular networks and set permission levels for those networks. Accepting the email invitation is easy, since the user should already be logged into their SecureX sign-on account. Now, logging into Meraki only requires an email address, with no password or additional DUO push.
Umbrella:
Under Admin → Authentication, configure SecureX sign-on, which requires a test login to ensure you can still log in before using SSO for authentication to Umbrella. There is no need to configure MFA in Umbrella, since SecureX sign-on comes with built-in DUO MFA. Existing users, and any new users added in Umbrella under Admin → Accounts, will now use SecureX sign-on to log into Umbrella. Logging into Umbrella is now a seamless launch from the SecureX dashboard or from the SecureX ribbon in any of the other consoles.
Secure Malware Analytics:
A Secure Malware Analytics organization admin can create new users in their Threat Grid tenant. This username is unique to Malware Analytics, but it can be connected to a SecureX sign-on account to take advantage of the seamless login flow. From the email invitation, the user creates a password for their Malware Analytics user and accepts the EULA. Then, in the top right under My Malware Analytics Account, the user has the option to connect their SecureX sign-on account, a one-click process if already signed in with SecureX sign-on. Now, when a user navigates to the Malware Analytics login page, simply clicking “Login with SecureX Sign-On” grants them access to the console.
Secure Endpoint:
The Secure Endpoint deployment at Black Hat is limited to iOS Clarity through Meraki Systems Manager for the conference iOS devices. Most of the asset information we need about the iPhones/iPads is brought in through the SecureX Device Insights inventory. However, for initial configuration and to view device trajectory, it is necessary to log into Secure Endpoint. A new Secure Endpoint account can be created under Accounts → Users, and an invite is sent to the corresponding email address. Accepting the invite is a smooth process, since the user is already signed in with SecureX sign-on. Privileges for the user in the Endpoint console can be granted from within the user account.
Conclusion:
To sum it all up, SecureX sign-on is the standard for the Cisco stack moving forward. With a new SecureX organization instantiated using SecureX sign-on, any new users of the Cisco stack at Black Hat will use SecureX sign-on. It has made our user management much more secure as we have expanded our presence at Black Hat, provides a unified login mechanism for all the products, and has modernized our login experience at the conference.
Malware Threat Intelligence made easy and available, with Cisco Secure Malware Analytics and SecureX by Ben Greenbaum
I’d gotten used to people’s reactions upon seeing SecureX in use for the first time. A few times at Black Hat, a small audience gathered just to watch us effortlessly correlate data from multiple threat intelligence repositories and several security sensor networks, in just a few clicks in a single interface, for rapid sequencing of events and an intuitive understanding of security events, situations, causes, and consequences. You’ve already read about a few of these instances above. Here is just one example of SecureX automatically putting together a chronological history of network events detected by products from two vendors (Cisco Umbrella and NetWitness). The participation of NetWitness in this and all of our other investigations was made possible by our open architecture, available APIs and API specifications, and the creation of the NetWitness module described above.
In addition to the traffic and online activities of hundreds of user devices on the network, we were responsible for monitoring a handful of Black Hat-owned devices as well. SecureX Device Insights made it easy to access information about these assets, either en masse or as required during an ongoing investigation. iOS Clarity for Secure Endpoint and Meraki Systems Manager both contributed to this useful tool, which adds business intelligence and asset context to SecureX’s native event and threat intelligence, for more complete and more actionable security intelligence overall.
SecureX is made possible by dozens of integrations, each bringing their own unique information and capabilities. This time though, for me, the star of the SecureX show was our malware analysis engine, Cisco Secure Malware Analytics (CSMA). Shortly before Black Hat Asia, the CSMA team released a new version of their SecureX module. SecureX can now query CSMA’s database of malware behavior and activity, including all relevant indicators and observables, as an automated part of the regular process of any investigation performed in SecureX Threat Response.
This capability is most useful in two scenarios:
1. Determining whether suspicious domains, IPs and files reported by any other technology had been observed in the analysis of any of the millions of publicly submitted file samples, or our own.
2. Rapidly gathering additional context about files submitted to the analysis engine by the integrated products in the Black Hat NOC.
The first was a significant time saver in several investigations. In the example below, we received an alert about connections to a suspicious domain. In that scenario, our first course of action is to investigate the domain and any other observables reported with it (typically the internal and public IPs included in the alert). Thanks to the new CSMA module, we immediately discovered that the domain had a history of being contacted by a variety of malware samples from multiple families. That information, corroborated by automatically gathered reputation data about each of those files from multiple sources, including SecureX partner Recorded Future, who donated a full threat intelligence license for the Black Hat NOC, gave us an immediate next direction to investigate as we hunted for evidence of those files being present in network traffic, or of any traffic to other C&C resources known to be used by those families. From the first alert to having a robust, data-driven set of related signals to look for took only minutes.
The other scenario, investigating files submitted for analysis, came up less frequently, but when it did, the CSMA/SecureX integration was equally impressive. We could nearly immediately look for evidence of any of our analyzed samples in the environment across all other deployed SecureX-compatible technologies. That evidence was no longer limited to the hash itself; it included any of the network resources or dropped payloads associated with the sample as well, easily identifying local targets who had perhaps not seen the exact variant submitted, but who had nonetheless been in contact with that sample’s command-and-control infrastructure or other related artifacts.
And of course, thanks to the presence of the ribbon in the CSMA UI, we could be even more efficient and do this with multiple samples at once.
SecureX greatly increased the efficiency of our small volunteer team, and certainly made it possible for us to investigate more alerts and events, and hunt for more threats, all more thoroughly, than we would have been able to without it. SecureX truly took this team to the next level, by augmenting and operationalizing the tools and the staff that we had at our disposal.
We look forward to seeing you at Black Hat USA in Las Vegas, 6-11 August 2022!
Acknowledgements: Special thanks to the Cisco Meraki and Cisco Secure Black Hat NOC team: Aditya Sankar, Aldous Yeung, Alejo Calaoagan, Ben Greenbaum, Christian Clasen, Felix H Y Lam, George Dorsey, Humphrey Cheung, Ian Redden, Jeffrey Chua, Jeffry Handal, Jonny Noble, Matt Vander Horst, Paul Fidler and Steven Fan.
Also, to our NOC partners NetWitness (especially David Glover), Palo Alto Networks (especially James Holland), Gigamon, IronNet (especially Bill Swearington), and the entire Black Hat / Informa Tech staff (especially Grifter ‘Neil Wyler’, Bart Stump, James Pope, Steve Fink and Steve Oldenbourg).
About Black Hat
For more than 20 years, Black Hat has provided attendees with the very latest in information security research, development, and trends. These high-profile global events and trainings are driven by the needs of the security community, striving to bring together the best minds in the industry. Black Hat inspires professionals at all career levels, encouraging growth and collaboration among academia, world-class researchers, and leaders in the public and private sectors. Black Hat Briefings and Trainings are held annually in the United States, Europe and Asia. More information is available at: blackhat.com. Black Hat is brought to you by Informa Tech.
Labtainers include more than 50 cyber lab exercises and tools to build your own. Import a single VM appliance or install on a Linux system and your students are done with provisioning and administrative setup, for these and future lab exercises.
Labtainers provide controlled and consistent execution environments in which students perform labs entirely within the confines of their computer, regardless of the Linux distribution and packages installed on the student's computer. Labtainers run on our [VM appliance][vm-appliancee], or on any Linux with Docker installed. Labtainers are also available as cloud-based VMs, e.g., on Azure as described in the Student Guide.
See the Student Guide for installation and use, and the Instructor Guide for student assessment. Developing and customizing lab exercises is described in the Designer Guide. See the Papers for additional information about the framework. The Labtainers website, and downloads (including VM appliances with Labtainers pre-installed) are at https://nps.edu/web/c3o/labtainers.
Distribution created: 03/25/2022 09:37
Revision: v1.3.7c
Commit: 626ea075
Branch: master
Please see the licensing and distribution information in the docs/license.md file.
scripts/labtainers-student -- the work directory for running and testing student labs. You must be in that directory to run student labs.
scripts/labtainers-instructor -- the work directory for running and testing automated assessment and viewing student results.
labs -- Files specific to each of the labs
setup_scripts -- scripts for installing Labtainers and Docker and updating Labtainers
docs -- latex source for the labdesigner.pdf, and other documentation.
UI -- Labtainers lab editor source code (Java).
headless-lite -- scripts for managing Docker Workstation and cloud instances of Labtainers (systems that do not have native X11 servers).
scripts/designer -- Tools for building new labs and managing base Docker images.
config -- system-wide configuration settings (these are not the lab-specific configuration settings).
distrib -- distribution support scripts, e.g., for publishing labs to the Docker hub.
testsets -- Test procedures and expected results. (Per-lab drivers for SimLab are not distributed).
pkg-mirrors -- utility scripts for internal NPS package mirroring to reduce external package pulling during tests and distribution.
Use the GitHub issue reports, or email me at mfthomps@nps.edu
Also see https://my.nps.edu/web/c3o/support1
The standard Labtainers distribution does not include files required for development of new labs. For those, run ./update-designer.sh from the labtainer/trunk/setup_scripts directory.
The installation script and the update-designer.sh script set environment variables, so you may want to logout/login, or start a new bash shell before using Labtainers the first time.
This blog was also published by APNIC.
With so much traffic on the global internet day after day, it’s not always easy to spot the occasional irregularity. After all, there are numerous layers of complexity that go into the serving of webpages, with multiple companies, agencies and organizations each playing a role.
That’s why when something does catch our attention, it’s important that the various entities work together to explore the cause and, more importantly, try to identify whether it’s a malicious actor at work, a glitch in the process or maybe even something entirely intentional.
That’s what occurred last year when Internet Corporation for Assigned Names and Numbers (ICANN) staff and contractors were analyzing names in Domain Name System (DNS) queries seen at the ICANN Managed Root Server, and the analysis program ran out of memory for one of their data files. After some investigating, they found the cause to be a very large number of mysterious queries for unique names such as f863zvv1xy2qf.surgery, bp639i-3nirf.hiphop, qo35jjk419gfm.net and yyif0aijr21gn.com.
While these were queries for names in existing top-level domains, the first label consisted of 12 or 13 random-looking characters. After ICANN shared their discovery with the other root server operators, Verisign took a closer look to help understand the situation.
One of the first things we noticed was that all of these mysterious queries were of type NS and came from one autonomous system network, AS 15169, assigned to Google LLC. Additionally, we confirmed that it was occurring consistently for numerous TLDs. (See Fig. 1)
Although this phenomenon was newly uncovered, analysis of historical data showed these traffic patterns actually began in late 2019. (See Fig. 2)
Perhaps the most interesting discovery, however, was that these specific query names were not also seen at the .com and .net name servers operated by Verisign. The data in Figure 3 shows the fraction of queried names that appear at A-root and J-root and also appear on the .com and .net name servers. For second-level labels of 12 and 13 characters, this fraction is essentially zero. The graphs also show that there appear to be queries for names with second-level label lengths of 10 and 11 characters, which are also absent from the TLD data.
The final mysterious aspect to this traffic is that it deviated from our normal expectation of caching. Remember that these are queries to a root name server, which returns a referral to the delegated name servers for a TLD. For example, when a root name server receives a query for yyif0aijr21gn.com, the response is a list of the name servers that are authoritative for the .com zone. The records in this response have a time to live of two days, meaning that the recursive name server can cache and reuse this data for that amount of time.
However, in this traffic we see queries for .com domain names from AS 15169 at the rate of about 30 million per day. (See Fig. 4) It is well known that Google Public DNS has thousands of backend servers and limits TTLs to a maximum of six hours. Assuming 4,000 backend servers each cached a .com referral for six hours, we might expect about 16,000 queries over a 24-hour period. The observed count is about 2,000 times higher by this back-of-the-envelope calculation.
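That back-of-the-envelope estimate can be reproduced directly (the backend server count and TTL cap are the figures assumed above, not measured values):

```python
# Rough caching model: each backend server re-fetches the .com referral
# once per TTL window, so expected daily queries = servers * (24h / TTL).
backend_servers = 4_000        # assumed number of Google Public DNS backends
ttl_hours = 6                  # Google Public DNS caps TTLs at six hours
observed_per_day = 30_000_000  # .com queries seen from AS 15169 per day

expected_per_day = backend_servers * (24 // ttl_hours)
ratio = observed_per_day / expected_per_day

print(expected_per_day)  # 16000
print(round(ratio))      # 1875, i.e. roughly 2,000x the expectation
```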
From our initial analysis, it was unclear if these queries represented legitimate end-user activity, though we were confident that source IP address spoofing was not involved. However, since the query names shared some similarities to those used by botnets, we could not rule out malicious activity.
These findings were presented last year at the DNS-OARC 35a virtual meeting. In the conference chat room after the talk, the missing piece of this puzzle was mentioned by a conference participant. There is a Google webpage describing its public DNS service that talks about prepending nonce (i.e., random) labels for cache misses to increase entropy. In what came to be known as “the Kaminsky Attack,” an attacker can cause a recursive name server to emit queries for names chosen by the attacker. Prepending a nonce label adds unpredictability to the queries, making it very difficult to spoof a response. Note, however, that nonce prepending only works for queries where the reply is a referral.
In addition, Google DNS has implemented a form of query name minimization (see RFC 7816 and RFC 9156). As such, if a user requests the IP address of www.example.com and Google DNS decides this warrants a query to a root name server, it takes the name, strips all labels except for the TLD and then prepends a nonce string, resulting in something like u5vmt7xanb6rf.com. A root server’s response to that query is identical to one using the original query name.
Now, we are able to explain nearly all of the mysterious aspects of this query traffic from Google. We see random second-level labels because of the nonce strings that are designed to prevent spoofing. The 12- and 13-character-long labels are most likely the result of converting a 64-bit random value into an unpadded ASCII label with encoding similar to Base32. We don’t observe the same queries at TLD name servers because of both the nonce prepending and query name minimization. The query type is always NS because of query name minimization.
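The 13-character observation is consistent with that encoding. Here is a minimal sketch of the label construction described above; the exact alphabet Google uses is an assumption, and standard unpadded Base32 is shown for illustration:

```python
import base64
import os

def nonce_label(tld: str) -> str:
    """Prepend a nonce label encoding 64 random bits to a bare TLD."""
    rand64 = os.urandom(8)  # 64 bits of randomness
    # Unpadded Base32: 64 bits at 5 bits per character -> 13 characters.
    label = base64.b32encode(rand64).decode("ascii").rstrip("=").lower()
    return f"{label}.{tld}"

qname = nonce_label("com")
label = qname.split(".")[0]
print(len(label))  # 13, matching the observed second-level label lengths
```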
With that said, there’s still one aspect that eludes explanation: the high query rate (2000x for .com) and apparent lack of caching. And so, this aspect of the mystery continues.
Even though we haven’t fully closed the books on this case, one thing is certain: without the community’s teamwork to put the pieces of the puzzle together, explanations for this strange traffic may have remained unknown today. The case of the mysterious DNS root query traffic is a perfect example of the collaboration that’s required to navigate today’s ever-changing cyber environment. We’re grateful and humbled to be part of such a dedicated community that is intent on ensuring the security, stability and resiliency of the internet, and we look forward to more productive teamwork in the future.
The post More Mysterious DNS Root Query Traffic from a Large Cloud/DNS Operator appeared first on Verisign Blog.
Give employees the knowledge needed to spot the warning signs of a cyberattack and to understand when they may be putting sensitive data at risk
The post Cybersecurity awareness training: What is it and what works best? appeared first on WeLiveSecurity
AutoPWN Suite is a project for scanning vulnerabilities and exploiting systems automatically.
AutoPWN Suite uses an nmap TCP-SYN scan to enumerate the host and detect the versions of software running on it. After gathering enough information about the host, AutoPWN Suite automatically generates a list of "keywords" to search the NIST vulnerability database.
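The keyword lookup step can be sketched as a request to NVD's public CVE API. This is an illustrative query of our own, not AutoPWN Suite's actual code; the endpoint and parameter names are from the NVD 2.0 API:

```python
import urllib.parse
import urllib.request

def build_nvd_query(keyword, api_key=None):
    """Build (but do not send) a keyword search request to the NVD CVE API."""
    base = "https://services.nvd.nist.gov/rest/json/cves/2.0"
    url = base + "?" + urllib.parse.urlencode({"keywordSearch": keyword})
    # An API key raises the NVD rate limit, matching the tool's -a option.
    headers = {"apiKey": api_key} if api_key else {}
    return urllib.request.Request(url, headers=headers)

req = build_nvd_query("openssh 7.2")
print(req.full_url)
# https://services.nvd.nist.gov/rest/json/cves/2.0?keywordSearch=openssh+7.2
```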
Visit "PWN Spot!" for more information
AutoPWN Suite has very user-friendly, easy-to-read output.
You can install it using pip. (sudo recommended)
sudo pip install autopwn-suite
OR
You can clone the repo.
git clone https://github.com/GamehunterKaan/AutoPWN-Suite.git
OR
You can download debian (deb) package from releases.
sudo apt-get install ./autopwn-suite_1.1.5.deb
Running with root privileges (sudo) is always recommended.
Automatic mode (This is the intended way of using AutoPWN Suite.)
autopwn-suite -y
Help Menu
$ autopwn-suite -h
usage: autopwn.py [-h] [-o OUTPUT] [-t TARGET] [-hf HOSTFILE] [-st {arp,ping}] [-nf NMAPFLAGS] [-s {0,1,2,3,4,5}] [-a API] [-y] [-m {evade,noise,normal}] [-nt TIMEOUT] [-c CONFIG] [-v]
AutoPWN Suite
options:
-h, --help show this help message and exit
-o OUTPUT, --output OUTPUT
Output file name. (Default : autopwn.log)
-t TARGET, --target TARGET
Target range to scan. This argument overwrites the hostfile argument. (192.168.0.1 or 192.168.0.0/24)
-hf HOSTFILE, --hostfile HOSTFILE
File containing a list of hosts to scan.
-st {arp,ping}, --scantype {arp,ping}
Scan type.
-nf NMAPFLAGS, --nmapflags NMAPFLAGS
Custom nmap flags to use for portscan. (Has to be specified like : -nf="-O")
-s {0,1,2,3,4,5}, --speed {0,1,2,3,4,5}
Scan speed. (Default : 3)
-a API, --api API Specify API key for vulnerability detection for faster scanning. (Default : None)
-y, --yesplease Don't ask for anything. (Full automatic mode)
-m {evade,noise,normal}, --mode {evade,noise,normal}
Scan mode.
-nt TIMEOUT, --noisetimeout TIMEOUT
Noise mode timeout. (Default : None)
-c CONFIG, --config CONFIG
Specify a config file to use. (Default : None)
-v, --version Print version and exit.
AutoPWN Suite is available via pip (pip install autopwn-suite) and as a .deb package for Debian-based systems like Kali Linux and Parrot Security.
ssh, vnc, ftp etc.
I would be glad if you are willing to contribute to this project. I am looking forward to merging your pull request unless it's something that is not needed or just a personal preference. Click here for more info!
You may not rent, lease, distribute, modify, sell or transfer the software to a third party. AutoPWN Suite is free for distribution and modification on the condition that credit is provided to the creator and it is not put to commercial use. You may not use the software for illegal or nefarious purposes. No liability for consequential damages to the maximum extent permitted by all applicable laws.
Having trouble using this tool? You can reach out to me on Discord, create an issue, or start a discussion!
Technology is understandably viewed as a nuisance to be managed in pursuit of the health organizations’ primary mission
The post RSA – Digital healthcare meets security, but does it really want to? appeared first on WeLiveSecurity
API-based data transfer is so rapid, there’s but little time to stop very bad things happening quickly
The post RSA – APIs, your organization’s dedicated backdoors appeared first on WeLiveSecurity
Authored by Dexter Shin
In our last post, McAfee’s Mobile Research Team introduced new Android malware targeting Instagram users who want to increase their followers or likes. As we researched this threat further, we found another malware type that uses different technical methods to steal users’ credentials. The targets are users who are not satisfied with the default functions provided by Instagram. Various Instagram modification applications already exist for those users on the Internet. The new malware we found pretends to be a popular mod app and steals Instagram credentials.
Instander is one of the famous Instagram modification applications available for Android devices to help Instagram users access extra helpful features. The mod app supports uploading high-quality images and downloading posted photos and videos.
The initial screens of this malware and Instander are similar, as shown below.
Figure 1. Legitimate Instander app (left) and malware (right)
Next, the malware requests an account (username or email) and password. Finally, it displays an error message regardless of whether the login information is correct.
Figure 2. Malware requests account and password
The malware steals the user’s username and password in a unique way. The main trick is to use the Firebase API. First, the user input value is combined with l@gmail.com. This value and a static password (kamalw20051) are then sent via the Firebase API createUserWithEmailAndPassword. The password the user entered is processed the same way. After receiving the user’s account and password input, the malware makes this request twice.
Since we cannot see the dashboard of the malware author, we tested it using the same API. As a result, we checked the user input value in plain text on the dashboard.
According to the Firebase documentation, the createUserWithEmailAndPassword API creates a new user account associated with the specified email address and password. Because the first parameter must match an email pattern, the malware author uses the above code to create email patterns regardless of the user input values.
It is an API for creating accounts in Firebase, so the administrator can check the account names in the Firebase dashboard. The victim’s account and password were submitted as the Firebase account name, so they appear as plain text without hashing or masking.
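The abuse can be illustrated with the Firebase Auth REST endpoint that backs createUserWithEmailAndPassword. The static password and the l@gmail.com suffix are the values reported above; how exactly the stolen input is combined with the suffix is simplified here, and the request is only constructed, never sent:

```python
import json

# Firebase Auth REST endpoint behind createUserWithEmailAndPassword.
SIGNUP_URL = "https://identitytoolkit.googleapis.com/v1/accounts:signUp"
STATIC_PASSWORD = "kamalw20051"  # static password observed in the malware

def build_signup_payload(stolen_input):
    """Turn a stolen credential into a Firebase signUp request body.

    The stolen value is forced into an email pattern, so it shows up
    verbatim as an account name on the attacker's Firebase dashboard.
    """
    fake_email = stolen_input + "l@gmail.com"  # assumed combination order
    body = {"email": fake_email,
            "password": STATIC_PASSWORD,
            "returnSecureToken": True}
    return json.dumps(body).encode("utf-8")

payload = build_signup_payload("victim_username")
print(payload.decode())
```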
An interesting point about the malware’s network traffic is that it communicates with the Firebase server in Protobuf format, even though the initial configuration of this Firebase API uses the JSON format. Although the Protobuf format is readable enough, it can be assumed that the malware author intentionally attempts to obfuscate the network traffic through these additional settings. Also, the domain used for data transfer (www.googleapis.com) is managed by Google. Because it is such a common, benign-looking domain, many network filtering and firewall solutions do not detect it.
As mentioned, users should always be careful about installing third-party apps. Aside from the types of malware we’ve introduced so far, attackers are trying to steal users’ credentials in a variety of ways. Therefore, you should employ security software on your mobile devices and always keep it up to date.
Fortunately, McAfee Mobile Security is able to detect this as Android/InstaStealer and protect you from similar threats. For more information visit McAfee Mobile Security
SHA256:
The post Instagram credentials Stealer: Disguised as Mod App appeared first on McAfee Blog.
To consumers, the Internet of Things might bring to mind a smart fridge that lets you know when to buy more eggs, or the ability to control your home’s lighting and temperature remotely through your phone.
But for cybersecurity professionals, internet-connected medical devices are more likely to be top-of-mind.
Not only is the Internet of Medical Things, or IoMT, surging — with the global market projected to reach $160 billion by 2027, according to Emergen Research — the stakes can be quite high, and sometimes even matters of life or death.
The risk to the individual patients is very small, experts caution, noting bad actors are far more likely to disrupt hospital operations, use unsecure devices to access other parts of the network or hold machines and data hostage for ransom.
“When people ask me, ’Should I be worried?’ I tell them no, and here’s why,” said Matthew Clapham, a veteran product cybersecurity specialist. “In the medical space, every single time I’ve probed areas that could potentially compromise patient safety, I’ve always been impressed with what I’ve found.”
That doesn’t mean the risk is zero, noted Christos Sarris, a longtime information security analyst. He shared an anecdote in Cisco Secure’s recent e-book, “Building Security Resilience,” about finding malware on an intensive care unit device that compromised a pump used to deliver precise doses of medicine.
Luckily, the threat, which was included in a vendor-provided patch, was caught during testing.
“The self-validation was fine,” Sarris said in a follow-up interview. “The vendor’s technicians signed off on it. So we only found this unusual behavior because we tested the system for several days before returning it to use.”
But because such testing protocols take valuable equipment out of service and soak up the attention of often-stretched IT teams, they’re not the norm everywhere, he added.
Sarris and Clapham were among several security experts we spoke to for a deeper dive into the challenges of IoT medical device security and top-line strategies for protecting patients and hospitals.
Connected medical devices are becoming so integral to modern health care that a single hospital room might have 20 of them, Penn Medicine’s Dan Costantino noted in Healthcare IT News.
Sarris, who is currently an information security manager at Sainsbury’s, outlined some of the challenges this reality presents for hospital IT teams.
Health care IT teams are responsible for devices made by a multiplicity of vendors — including large, well-known brands, cheaper off-brand vendors, and small manufacturers of highly specialized instruments, he said. That’s a lot to keep up with, and teams don’t always have direct access to operating systems, patching and security testing, and instead are reliant on vendors to provide necessary updates and maintenance.
“Even today, you will rarely see proper security testing on these devices,” he said. “The biggest challenge is the environment. It’s not tens, it’s hundreds of devices. And each device is designed for a specific purpose. It has its own operating system, its own operational needs and so forth. So it’s very, very difficult — the IT teams can’t know everything.”
Cisco Advisory CISO Wolfgang Goerlich noted that one unique challenge for securing medical devices is that they often can’t be patched or replaced. Capital outlays are high and devices might be kept in service for a decade or more.
“So we effectively have a small window of time — which can be measured in hours or years, depending on how fortunate we are — where a device is not vulnerable to any known attacks,” he said. “And then, when they do become vulnerable, we have a long-tailed window of vulnerability.”
Or, as Clapham summed it up, “The bits are going to break down much faster than the iron.”
The Food and Drug Administration is taking the issue seriously, however, and actively working to improve how security risks are addressed throughout a device’s life cycle, as well as to mandate better disclosure of vulnerabilities when they are discovered.
“FDA seeks to require that devices have the capability to be updated and patched in a timely manner; that premarket submissions to FDA include evidence demonstrating the capability from a design and architecture perspective for device updating and patching… and that device firms publicly disclose when they learn of a cybersecurity vulnerability so users know when a device they use may be vulnerable and to provide direction to customers to reduce their risk,” Kevin Fu, acting director of medical device security at the FDA’s Center for Devices and Radiological Health, explained to MedTech Dive last year.
For hospitals and other health care providers, improving the security posture of connected devices boils down to a few key, and somewhat obvious, things: attention to network security, attention to fundamentals like a zero-trust security framework more broadly, and investing in the staffing and time needed to do the work right, Goerlich said.
“If everything is properly segmented, the risk of any of these devices being vulnerable and exploited goes way, way down,” he said. “But getting to that point is a journey.”
Sarris agrees, noting many hospitals have flat networks — that is, they reduce the cost and effort needed to administer them by keeping everything connected in a single domain or subdomain. Isolating these critical and potentially vulnerable devices from the rest of the network improves security, but increases the complexity and costs of oversight, including for things like providing remote access to vendors so they can provide support.
“It’s important to connect these devices into a network that’s specifically designed around the challenges they present,” Sarris said. “You may not have security controls on the devices themselves, but you can have security controls around them. You can use microsegmentation, you can use network monitoring, et cetera. Some of these systems are handling a lot of sensitive information and they don’t even support the encryption of data in transit — it can really be all over the place.”
The COVID-19 pandemic put a lot of financial pressure on health systems, Goerlich noted. During the virus’ peaks, many non-emergency procedures were delayed or canceled, hitting hospitals’ bottom lines pretty hard over several years. This put even greater pressure on already strained cybersecurity budgets at a time of increasing needs.
“Again, devices have time as a security property,” Goerlich said, “which means we’ve got two years of vulnerabilities that may not have been addressed. And which also probably means we’re going to try to push the lifecycle of that equipment out and try to maintain it for two more years.”
Clapham, who previously served as director of cybersecurity for software and the cloud at GE Healthcare, said device manufacturers are working hard to ensure new devices are as secure as they can be when they’re first rolled out and when new features are added through software updates.
“When you’re adding new functionality that might need to talk to a central service somewhere, either locally or in the cloud, that could have implications for security — so that’s where we go in and do our due diligence,” he said.
The revolution that needs to happen is one of mindset, Clapham said. Companies are waking up to the new reality of not just making a well-functioning device that has to last for over a decade, but of making a software suite to support the device that will need to be updated and have new features added over that long lifespan.
This should include adding additional headroom and flexibility in the hardware, he said. While it adds to costs on the front end, it will add longevity as software is updated over time. (Imagine the computer you bought in 2007 trying to run the operating system you have now.)
“Ultimately, customers should expect a secure device, but they should also expect to pay for the additional overhead it will take to make sure that device stays secure over time,” he said. “And manufacturers need to plan for upgradability and the ability to swap out components with minimal downtime.”
We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!
Cisco Secure Social Channels
Educating employees about how to spot phishing attacks can strike a much-needed blow for network defenders
The post Phishing awareness training: Help your employees avoid the hook appeared first on WeLiveSecurity
The past few months have been chock-full of conversations with security customers, partners, and industry leaders. After two years of virtual engagements, in-person events like our CISO Forum and Cisco Live, as well as the industry’s RSA Conference, underscore the power of face-to-face interactions. It’s a reminder of just how enriching conversations are and how incredibly interconnected the world is. And it’s only made closer by the security experiences that impact us all.
I had the pleasure of engaging with some of the industry’s best and brightest, sharing ideas, insights, and what keeps us up at night. The conversations offered more than an opportunity to reconnect and put faces with names. It was a chance to discuss some of the most critical cybersecurity issues and implications that are top of mind for organizations.
The collective sentiment is clear: the need for better security has never been stronger, and securing the future is good business. Disruptions are happening faster than ever before, making our interconnected world more unpredictable. Hybrid work is here to stay, hybrid and complex architectures will remain a reality for most organizations, and together they have dramatically expanded the threat surface. More and more businesses operate as ecosystems, so attacks have profound ripple effects across value chains. Attacks are also becoming more bespoke: government-sponsored threat actors and ransomware-as-a-service operations continue to evolve, challenging businesses to minimize the time from initial breach to complete compromise.
Regardless of where organizations are in their digital transformations, they are progressively embarking on journeys to unify networking and secure connectivity. Mobility, BYOD (bring your own device), cloud, increased collaboration, and the consumerization of IT have necessitated a new type of access control: zero trust security. Supporting a modern enterprise across a distributed network and infrastructure means validating user identities, continuously verifying authentication and device trust, and protecting every application, all without compromising user experience. Zero trust offers organizations a simpler approach to securing access for everyone, from any device, anywhere, while making life harder for attackers.
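The per-request verification at the heart of zero trust can be sketched in a few lines. This is a minimal, hypothetical policy check, not any Cisco product API: the application names, policy fields, and `evaluate` function are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool       # did this session pass multi-factor authentication?
    device_compliant: bool   # e.g. OS patched, disk encrypted (hypothetical posture check)
    app: str

# Hypothetical per-application policies. Every request is evaluated on its own,
# regardless of network location: "never trust, always verify".
APP_POLICIES = {
    "payroll": {"require_mfa": True, "require_compliant_device": True},
    "wiki":    {"require_mfa": False, "require_compliant_device": False},
}

def evaluate(req: AccessRequest) -> str:
    policy = APP_POLICIES.get(req.app)
    if policy is None:
        return "deny"      # default-deny for unknown applications
    if policy["require_mfa"] and not req.mfa_verified:
        return "step_up"   # prompt for additional authentication rather than blocking outright
    if policy["require_compliant_device"] and not req.device_compliant:
        return "deny"
    return "allow"
```

Note the default-deny stance for unknown applications and the "step up" outcome: instead of a hard block, the user is asked to re-authenticate, which preserves user experience while still verifying trust.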
Simplicity continues to be a hot topic, particularly in terms of what it delivers in practice. Beyond a frictionless user experience, the real value to customers lies in easing operational challenges. Security practitioners want an easier way to secure the edge, access, and operations, including threat intelligence and response. Key to this simplified experience is connecting and managing business-critical control points and vulnerabilities, exchanging data, and contextualizing threat intelligence. It requires a smarter ecosystem that brings together capabilities, unifying administration, policy, visibility, and control: simplicity that works hard and smart, and enhances security posture. The ultimate measure of simplicity is improved efficacy for the organization.
Insider cyber-attacks are among the fastest-growing threats to the modern enterprise network and an increasingly common cause of data breaches. Using their authorized access, employees intentionally or inadvertently cause harm by stealing, exposing, or destroying sensitive company data. Either way, the consequences are the same: significant cost and massive disruption. This is also one of the reasons "identity is the new perimeter" is trending, since the primary objective of most advanced attacks is to gain privileged credentials. Insider attack attempts are not slowing down. However, advanced telemetry, threat detection and protection, and continuous trusted access all help decelerate the trend by exposing suspicious or malicious insider activity. Innovations now enable businesses to analyze network traffic and historical patterns of employee access, and to decide whether to let an employee continue uninterrupted or prompt them to authenticate again.
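The "continue or re-authenticate" decision described above can be illustrated with a toy anomaly check on historical login hours. This is a deliberately simplified sketch under assumed inputs; real products correlate far richer telemetry than a single z-score, and the function name and threshold are hypothetical.

```python
from statistics import mean, pstdev

def reauth_needed(history_hours, current_hour, threshold=2.0):
    """Return True when a login falls far outside the user's historical
    access hours (measured as a z-score), suggesting a re-authentication
    prompt; return False to let the session continue uninterrupted."""
    mu = mean(history_hours)
    sigma = pstdev(history_hours)
    if sigma == 0:
        # No historical variation at all: flag any deviation from the norm.
        return current_hour != mu
    return abs(current_hour - mu) / sigma > threshold
```

For example, an employee who consistently logs in around 9–11 a.m. would sail through a 10 a.m. login but be prompted to re-authenticate at 3 a.m., matching the behavior the paragraph describes.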
Supply chain attacks have become one of the biggest security worries for businesses. Not only are the disruptions debilitating, but their full impact is difficult to predict. Attackers are highly aware that supply chains comprise larger entities tightly connected to a broad array of smaller, less cyber-savvy organizations. Lured by lucrative payouts, they seek the weakest link in the chain for a successful breach. In fact, two of the four biggest cyberattacks the Cisco Talos team saw in the field last year were supply chain attacks that deployed ransomware on their targets' networks: SolarWinds and REvil's exploitation of the Kaseya managed service provider. While there is no way to protect against ransomware absolutely, businesses are taking steps to bolster their defenses and guard against disaster.
Security incidents targeting personal information are on the rise; in fact, 86 percent of global consumers were victims of identity theft, credit/debit card fraud, or a data breach in 2020. In one recent engagement, the Cisco Talos team discovered that the API on a customer's website could have been exploited by an attacker to steal sensitive personal information. The good news is that governments and businesses alike are leaning into data privacy and protection, adhering to global regulations that enforce high standards for collecting, using, disclosing, storing, securing, accessing, transferring, and processing personal data. Within the past year, the U.S. government implemented new rules to ensure companies and federal agencies follow required cybersecurity standards. As long as cybercriminals keep trying to breach our privacy and data, these rules help hold us all accountable.
Through all these insightful discussions with customers, partners, and industry leaders, a theme emerged: when it comes to cybersecurity, preparation is key and the cost of being wrong is extraordinary. By acknowledging that disruptions will continue, businesses can prepare for whatever comes next; and when it comes, they will not only weather the storm but come out of it stronger. The good news is that the Cisco Security Business Group is already on this journey, actively addressing these challenges and empowering our customers to reach their full potential, securely.
One in five organizations have teetered on the brink of insolvency after a cyberattack. Can your company keep hackers at bay?
The post Cyberattacks: A very real existential threat to organizations appeared first on WeLiveSecurity