FreshRSS


(Cyber) Risk = Probability of Occurrence x Damage

Here’s How to Enhance Your Cyber Resilience with CVSS. In late 2023, the Common Vulnerability Scoring System (CVSS) v4.0 was unveiled, succeeding the eight-year-old CVSS v3.0, with the aim of enhancing vulnerability assessment for both industry and the public. This latest version introduces additional metrics like safety and automation to address criticism of lacking granularity while
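The headline formula is simple enough to operationalize directly. Below is a minimal sketch; the probability and damage inputs are illustrative assumptions (a CVSS score is often used only as a rough proxy for likelihood), not values from the article:

```python
# Toy implementation of the classic model: risk = probability of occurrence x damage.
# Inputs are illustrative assumptions, not CVSS values.

def risk_score(probability: float, damage: float) -> float:
    """Likelihood (0..1) times impact (e.g., estimated loss in dollars)."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must be in [0, 1]")
    return probability * damage

# A vulnerability judged 20% likely to be exploited this year,
# with an estimated $500,000 loss if it is:
print(f"${risk_score(0.20, 500_000):,.0f}")  # $100,000 annualized risk
```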

6 Mistakes Organizations Make When Deploying Advanced Authentication

Deploying advanced authentication measures is key to helping organizations address their weakest cybersecurity link: their human users. Having some form of 2-factor authentication in place is a great start, but many organizations may not yet be in that spot or have the needed level of authentication sophistication to adequately safeguard organizational data. When deploying

Bitcoin Forensic Analysis Uncovers Money Laundering Clusters and Criminal Proceeds

A forensic analysis of a graph dataset containing transactions on the Bitcoin blockchain has revealed clusters associated with illicit activity and money laundering, including detecting criminal proceeds sent to a crypto exchange and previously unknown wallets belonging to a Russian darknet market. The findings come from Elliptic in collaboration with researchers from the…

U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

The U.S. government has unveiled new security guidelines aimed at bolstering critical infrastructure against artificial intelligence (AI)-related threats. "These guidelines are informed by the whole-of-government effort to assess AI risks across all sixteen critical infrastructure sectors, and address threats both to and from, and involving AI systems," the Department of Homeland Security (DHS)…

Google Prevented 2.28 Million Malicious Apps from Reaching Play Store in 2023

Google on Monday revealed that almost 200,000 app submissions to its Play Store for Android were either rejected or remediated to address issues with access to sensitive data such as location or SMS messages over the past year. The tech giant also said it blocked 333,000 bad accounts from the app storefront in 2023 for attempting to distribute malware or for repeated policy violations. "In 2023,

AI Copilot: Launching Innovation Rockets, But Beware of the Darkness Ahead

Imagine a world where the software that powers your favorite apps, secures your online transactions, and keeps your digital life safe could be outsmarted and taken over by a cleverly disguised piece of code. This isn't a plot from the latest cyber-thriller; it's actually been a reality for years now. How this will change – in a positive or negative direction – as artificial intelligence (AI) takes on

PyPI Halts Sign-Ups Amid Surge of Malicious Package Uploads Targeting Developers

The maintainers of the Python Package Index (PyPI) repository briefly suspended new user sign-ups following an influx of malicious projects uploaded as part of a typosquatting campaign. PyPI said "new project creation and new user registration" was temporarily halted to mitigate what it said was a "malware upload campaign." The incident was resolved 10 hours later, on March 28, 2024, at 12:56
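Typosquatting campaigns like this one rely on package names that are one slip of the finger away from popular projects. A minimal sketch of the detection idea, using only the standard library (the "popular" list is a tiny hypothetical sample, not PyPI's actual defenses):

```python
# Flag install candidates whose names are suspiciously close to popular
# packages. Hypothetical sample list; real registries use richer heuristics.
from difflib import SequenceMatcher

POPULAR = {"requests", "numpy", "pandas", "django", "cryptography"}

def looks_like_typosquat(candidate: str, threshold: float = 0.8) -> str | None:
    for name in POPULAR:
        if candidate != name and SequenceMatcher(None, candidate, name).ratio() >= threshold:
            return name
    return None

for pkg in ("requestss", "nunpy", "flask"):
    hit = looks_like_typosquat(pkg)
    print(pkg, "->", f"suspiciously close to '{hit}'" if hit else "no match")
```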

GitHub Launches AI-Powered Autofix Tool to Assist Devs in Patching Security Flaws

GitHub on Wednesday announced that it's making available a feature called code scanning autofix in public beta for all Advanced Security customers to provide targeted recommendations in an effort to avoid introducing new security issues. "Powered by GitHub Copilot and CodeQL, code scanning autofix covers more than 90% of alert types in JavaScript, TypeScript, Java, and

From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News.
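To see why string-based YARA rules are so easy to defeat, consider a minimal sketch using the yara-python bindings (an assumed dependency: pip install yara-python). The rule and payload strings are hypothetical; the point is that a small source-level rewrite of a matched string, exactly what LLM-driven augmentation automates, silences the rule:

```python
# Demonstrates the brittleness of string-based YARA rules.
# Rule and "malware" strings are hypothetical examples.
import yara

rule = yara.compile(source=r'''
rule demo_stealer {
    strings:
        $s1 = "SteaLer-Build-v1" ascii
    condition:
        $s1
}
''')

original  = b"...binary...SteaLer-Build-v1...binary..."
rewritten = b"...binary...St3aLer_Build.v2...binary..."  # small "augmentation"

print(bool(rule.match(data=original)))   # True  - rule fires
print(bool(rule.match(data=rewritten)))  # False - rule evaded
```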

Over 100 Malicious AI/ML Models Found on Hugging Face Platform

As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform. These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims'
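The pickle-to-code-execution mechanism is worth seeing concretely. A self-contained sketch (with a harmless echo standing in for a real payload) shows how an object can dictate its own reconstruction via __reduce__, so merely loading a "model" runs attacker code:

```python
# Why loading an untrusted pickle executes code: __reduce__ lets an object
# specify any callable to be invoked during deserialization.
import os
import pickle

class EvilPayload:
    def __reduce__(self):
        # A real payload would spawn a reverse shell; we just echo.
        return (os.system, ("echo code executed during unpickling",))

blob = pickle.dumps(EvilPayload())
pickle.loads(blob)  # loading the "model" runs the command
```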

New Hugging Face Vulnerability Exposes AI Models to Supply Chain Attacks

Cybersecurity researchers have found that it's possible to compromise the Hugging Face Safetensors conversion service to ultimately hijack the models submitted by users and result in supply chain attacks. "It's possible to send malicious pull requests with attacker-controlled data from the Hugging Face service to any repository on the platform, as well as hijack any models that are submitted

Three Tips to Protect Your Secrets from AI Accidents

Last year, the Open Worldwide Application Security Project (OWASP) published multiple versions of the "OWASP Top 10 For Large Language Models," reaching a 1.0 document in August and a 1.1 document in October. These documents not only demonstrate the rapidly evolving nature of Large Language Models, but the evolving ways in which they can be attacked and defended. We're going to talk in this

Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team

How to Achieve the Best Risk-Based Alerting (Bye-Bye SIEM)

Did you know that Network Detection and Response (NDR) has become the most effective technology to detect cyber threats? In contrast to SIEM, NDR offers adaptive cybersecurity with reduced false alerts and efficient threat response. NDR massively

Google Open Sources Magika: AI-Powered File Identification Tool

Google has announced that it's open-sourcing Magika, an artificial intelligence (AI)-powered tool to identify file types, to help defenders accurately detect binary and textual file types. "Magika outperforms conventional file identification methods providing an overall 30% accuracy boost and up to 95% higher precision on traditionally hard to identify, but potentially problematic content
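For contrast, here is a sketch of the conventional, signature-based approach Magika is being measured against. The signature table is a tiny illustrative subset; tools like file ship thousands of patterns, and headerless content (plain text, scripts) is precisely where byte signatures fail and a learned model can help:

```python
# Conventional magic-bytes file identification: match known header signatures.
# Tiny illustrative subset of signatures.
SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"%PDF-": "pdf",
    b"PK\x03\x04": "zip/office",
    b"\x7fELF": "elf",
    b"MZ": "pe/exe",
}

def identify(data: bytes) -> str:
    for magic, label in SIGNATURES.items():
        if data.startswith(magic):
            return label
    return "unknown"

print(identify(b"%PDF-1.7 ..."))   # pdf
print(identify(b"import os\n"))    # unknown: no magic bytes to match
```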

Riding the AI Waves: The Rise of Artificial Intelligence to Combat Cyber Threats

In nearly every segment of our lives, AI (artificial intelligence) now makes a significant impact: It can deliver better healthcare diagnoses and treatments; detect and reduce the risk of financial fraud; improve inventory management; and serve up the right recommendation for a streaming movie on Friday night. However, one can also make a strong case that some of AI’s most significant impacts

TensorFlow CI/CD Flaw Exposed Supply Chain to Poisoning Attacks

Continuous integration and continuous delivery (CI/CD) misconfigurations discovered in the open-source TensorFlow machine learning framework could have been exploited to orchestrate supply chain attacks. The misconfigurations could be abused by an attacker to "conduct a supply chain compromise of TensorFlow releases on GitHub and PyPi by compromising TensorFlow's build agents via

This Free Discovery Tool Finds and Mitigates AI-SaaS Risks

Wing Security announced today that it now offers free discovery and a paid tier for automated control over thousands of AI and AI-powered SaaS applications. This will allow companies to better protect their intellectual property (IP) and data against the growing and evolving risks of AI usage. SaaS applications seem to be multiplying by the day, and so does their integration of AI

Getting off the Attack Surface Hamster Wheel: Identity Can Help

IT professionals have developed a sophisticated understanding of the enterprise attack surface – what it is, how to quantify it and how to manage it.  The process is simple: begin by thoroughly assessing the attack surface, encompassing the entire IT environment. Identify all potential entry and exit points where unauthorized access could occur. Strengthen these vulnerable points using

NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

The U.S. National Institute of Standards and Technology (NIST) is calling attention to the privacy and security challenges that arise as a result of increased deployment of artificial intelligence (AI) systems in recent years. “These security and privacy challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to

Google Unveils RETVec - Gmail's New Defense Against Spam and Malicious Emails

Google has revealed a new multilingual text vectorizer called RETVec (short for Resilient and Efficient Text Vectorizer) to help detect potentially harmful content such as spam and malicious emails in Gmail. "RETVec is trained to be resilient against character-level manipulations including insertion, deletion, typos, homoglyphs, LEET substitution, and more," according to the…
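A rough sketch of the manipulations RETVec is trained to absorb: LEET substitutions and Unicode homoglyphs that defeat naive exact-match filters. The mapping tables below are small illustrative samples, not RETVec's actual method (which learns resilience rather than applying fixed rules):

```python
# Undo a few character-level evasion tricks so a downstream filter can match.
# Illustrative mappings only; real attacks use far larger substitution sets.
LEET = str.maketrans({"0": "o", "3": "e", "4": "a", "5": "s", "7": "t"})
HOMOGLYPHS = str.maketrans({"а": "a", "е": "e", "о": "o"})  # Cyrillic look-alikes

def normalize(text: str) -> str:
    return text.translate(HOMOGLYPHS).translate(LEET).lower()

spam = "FR33 L0ttery! G3t your priz3 h3re"   # LEET-mangled lure
print(normalize(spam))                        # "free lottery! get your prize here"
```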

AI & Your Family: The Wows and Potential Risks

By: McAfee

When we come across the term Artificial Intelligence (AI), our mind often ventures into the realm of sci-fi movies like I, Robot, The Matrix, and Ex Machina. We’ve always perceived AI as a futuristic concept, something that’s happening in a galaxy far, far away. However, AI is not only here in our present but has also been a part of our lives for several years in the form of various technological devices and applications.

In our day-to-day lives, we use AI in many instances without even realizing it. AI has permeated into our homes, our workplaces, and is at our fingertips through our smartphones. From cell phones with built-in smart assistants to home assistants that carry out voice commands, from social networks that determine what content we see to music apps that curate playlists based on our preferences, AI has its footprints everywhere. Therefore, it’s integral to not only embrace the wows of this impressive technology but also understand and discuss the potential risks associated with it.

Dig Deeper: Artificial Imposters—Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam

AI in Daily Life: A Blend of Convenience and Intrusion

AI, a term that might sound intimidating to many, is not so when we understand it. It is essentially technology that can be programmed to achieve certain goals without assistance. In simple words, it’s a computer’s ability to predict, process data, evaluate it, and take necessary action. This smart way of performing tasks is being implemented in education, business, manufacturing, retail, transportation, and almost every other industry and cultural sector you can think of.

AI has been doing a lot of good too. For instance, Instagram, the second most popular social network, is now deploying AI technology to detect and combat cyberbullying in both comments and photos. No doubt, AI is having a significant impact on everyday life and is poised to metamorphose the future landscape. However, alongside its benefits, AI has brought forward a set of new challenges and risks. From self-driving cars malfunctioning to potential jobs lost to AI robots, from fake videos and images to privacy breaches, the concerns are real and need timely discussions and preventive measures.

Navigating the Wows and Risks of AI

AI has made it easier for people to face-swap within images and videos, leading to “deep fake” videos that appear remarkably realistic and often go viral. A desktop application called FakeApp allows users to seamlessly swap faces and share fake videos and images. While this displays the power of AI technology, it also brings to light the responsibility and critical thinking required when consuming and sharing online content.

Dig Deeper: The Future of Technology: AI, Deepfake, & Connected Devices

Yet another concern raised by AI is privacy breaches. The Cambridge Analytica/Facebook scandal of 2018, alleged to have used AI technology unethically to collect Facebook user data, serves as a reminder that our private (and public) information can be exploited for financial or political gain. Thus, it becomes crucial to discuss and take necessary steps like locking down privacy settings on social networks and being mindful of the information shared in the public feed, including reactions and comments on other content.

McAfee Pro Tip: Cybercriminals employ advanced methods to deceive individuals, propagating sensationalized fake news, creating deceptive catfish dating profiles, and orchestrating harmful impersonations. Recognizing sophisticated AI-generated content can pose a challenge, but certain indicators may signal that you’re encountering a dubious image or interacting with a perpetrator operating behind an AI-generated profile. Know the indicators. 

AI and Cybercrime

With the advent of AI, cybercrime has found a new ally. As per McAfee’s Threats Prediction Report, AI technology might enable hackers to bypass security measures on networks undetected. This can lead to data breaches, malware attacks, ransomware, and other criminal activities. Moreover, AI-generated phishing emails are scamming people into unknowingly handing over sensitive data.

Dig Deeper: How to Keep Your Data Safe From the Latest Phishing Scam

Bogus emails are becoming highly personalized and can trick intelligent users into clicking malicious links. Given the sophistication of these AI-related scams, it is vital to constantly remind ourselves and our families to be cautious with every click, even those from known sources. The need to be alert and informed cannot be overstressed, especially in times when AI and cybercrime often seem to be two sides of the same coin.

IoT Security Concerns in an AI-Powered World

As homes evolve to be smarter and synced with AI-powered Internet of Things (IoT) products, potential threats have proliferated. These threats are not limited to computers and smartphones but extend to AI-enabled devices such as voice-activated assistants. According to McAfee’s Threat Prediction Report, these IoT devices are particularly susceptible as points of entry for cybercriminals. Other devices at risk, as highlighted by security experts, include routers and tablets.

This means we need to secure all our connected devices and home internet at its source – the network. Routers provided by your ISP (Internet Service Provider) are often less secure, so consider purchasing your own. As a primary step, ensure that all your devices are updated regularly. More importantly, change the default password on these devices and secure your primary network along with your guest network with strong passwords.

How to Discuss AI with Your Family

Having an open dialogue about AI and its implications is key to navigating the intricacies of this technology. Parents need to have open discussions with kids about the positives and negatives of AI technology. When discussing fake videos and images, emphasize the importance of critical thinking before sharing any content online. You might even introduce them to the desktop application FakeApp, which lets users seamlessly swap faces within images and videos, producing deep fake photos and videos that can appear remarkably realistic and often go viral.

Privacy is another critical area for discussion. After the Cambridge Analytica/Facebook scandal of 2018, the conversation about privacy breaches has become more significant. These incidents remind us how our private (and public) information can be misused for financial or political gain. Locking down privacy settings, being mindful of the information shared, and understanding the implications of reactions and comments are all topics worth discussing. 

Being Proactive Against AI-Enabled Cybercrime

Awareness and knowledge are the best tools against AI-enabled cybercrime. Making families understand that bogus emails can now be highly personalized and can trick even the most tech-savvy users into clicking malicious links is essential. AI can generate phishing emails, scamming people into handing over sensitive data. In this context, constant reminders to be cautious with every click, even those from known sources, are necessary.

Dig Deeper: Malicious Websites – The Web is a Dangerous Place

The advent of AI has also likely allowed hackers to bypass security measures on networks undetected, leading to data breaches, malware attacks, and ransomware. Therefore, being alert and informed is more than just a precaution – it is a vital safety measure in the digital age.

Final Thoughts

Artificial Intelligence has indeed woven itself into our everyday lives, making things more convenient, efficient, and connected. However, with these advancements come potential risks and challenges. From privacy breaches and fake content to AI-enabled cybercrime, the concerns are real and need our full attention. By understanding AI better, having open discussions, and taking appropriate security measures, we can leverage this technology’s immense potential without falling prey to its risks. In our AI-driven world, being informed, aware, and proactive is the key to staying safe and secure.

To safeguard and fortify your online identity, we strongly recommend that you delve into the extensive array of protective features offered by McAfee+. This comprehensive cybersecurity solution is designed to provide you with a robust defense against a wide spectrum of digital threats, ranging from malware and phishing attacks to data breaches and identity theft.

The post AI & Your Family: The Wows and Potential Risks appeared first on McAfee Blog.

U.S., U.K., and Global Partners Release Secure AI System Development Guidelines

The U.K. and U.S., along with international partners from 16 other countries, have released new guidelines for the development of secure artificial intelligence (AI) systems. "The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority," the U.S.

Predictive AI in Cybersecurity: Outcomes Demonstrate All AI is Not Created Equally

Here is what matters most when it comes to artificial intelligence (AI) in cybersecurity: Outcomes.  As the threat landscape evolves and generative AI is added to the toolsets available to defenders and attackers alike, evaluating the relative effectiveness of various AI-based security offerings is increasingly important — and difficult. Asking the right questions can help you spot solutions

Microsoft's AI-Powered Bing Chat Ads May Lead Users to Malware-Distributing Sites

By: THN
Malicious ads served inside Microsoft Bing's artificial intelligence (AI) chatbot are being used to distribute malware when searching for popular tools. The findings come from Malwarebytes, which revealed that unsuspecting users can be tricked into visiting booby-trapped sites and installing malware directly from Bing Chat conversations. Introduced by Microsoft in February 2023, Bing Chat is an 

New AMBERSQUID Cryptojacking Operation Targets Uncommon AWS Services

By: THN
A novel cloud-native cryptojacking operation has set its eyes on uncommon Amazon Web Services (AWS) offerings such as AWS Amplify, AWS Fargate, and Amazon SageMaker to illicitly mine cryptocurrency. The malicious cyber activity has been codenamed AMBERSQUID by cloud and container security firm Sysdig. "The AMBERSQUID operation was able to exploit cloud services without triggering the AWS

Everything You Wanted to Know About AI Security but Were Afraid to Ask

There’s been a great deal of AI hype recently, but that doesn’t mean the robots are here to replace us. This article sets the record straight and explains how businesses should approach AI. From musing about self-driving cars to fearing AI bots that could destroy the world, AI has captured our imaginations, dreams, and occasionally,

Learn How Your Business Data Can Amplify Your AI/ML Threat Detection Capabilities

In today's digital landscape, your business data is more than just numbers—it's a powerhouse. Imagine leveraging this data not only for profit but also for enhanced AI and Machine Learning (ML) threat detection. For companies like Comcast, this isn't a dream. It's reality. Your business comprehends its risks, vulnerabilities, and the unique environment in which it operates. No generic,

The Vulnerability of Zero Trust: Lessons from the Storm 0558 Hack

While IT security managers in companies and public administrations rely on the concept of Zero Trust, APTs (Advanced Persistent Threats) are putting its practical effectiveness to the test. Analysts, on the other hand, understand that Zero Trust can only be achieved with comprehensive insight into one's own network. Just recently, an attack believed to be perpetrated by the Chinese hacker group

Unveiling the Unseen: Identifying Data Exfiltration with Machine Learning

Why is data exfiltration detection paramount? The world is witnessing an exponential rise in ransomware and data theft employed to extort companies. At the same time, the industry faces numerous critical vulnerabilities in database software and company websites. This evolution paints a dire picture of data exposure and exfiltration that every security leader and team is grappling with. This
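As a minimal sketch of the statistical idea behind such detection (the traffic figures and 3-sigma threshold are hypothetical; production systems add features like destination reputation, timing, and payload entropy):

```python
# Flag hosts whose outbound volume is a statistical outlier versus their
# own baseline. Numbers and threshold are hypothetical.
from statistics import mean, stdev

baseline_mb = [120, 135, 110, 128, 122, 131, 118]  # daily outbound MB, one host
today_mb = 2450                                    # sudden spike

mu, sigma = mean(baseline_mb), stdev(baseline_mb)
z = (today_mb - mu) / sigma
if z > 3:
    print(f"ALERT: outbound volume z-score {z:.1f} - possible exfiltration")
```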

Italian Watchdog Bans OpenAI's ChatGPT Over Data Protection Concerns

The Italian data protection watchdog, Garante per la Protezione dei Dati Personali (aka Garante), has imposed a temporary ban on OpenAI's ChatGPT service in the country, citing data protection concerns. To that end, it has ordered the company to stop processing users' data with immediate effect, stating it intends to investigate the company over whether it's unlawfully processing such data in

Microsoft Introduces GPT-4 AI-Powered Security Copilot Tool to Empower Defenders

Microsoft on Tuesday unveiled Security Copilot in limited preview, marking its continued quest to embed AI-oriented features in an attempt to offer "end-to-end defense at machine speed and scale." Powered by OpenAI's GPT-4 generative AI and its own security-specific model, it's billed as a security analysis tool that enables cybersecurity analysts to quickly respond to threats, process signals,

The Future of Network Security: Predictive Analytics and ML-Driven Solutions

As the digital age evolves and continues to shape the business landscape, corporate networks have become increasingly complex and distributed. The amount of data a company collects to detect malicious behaviour constantly increases, making it challenging to detect deceptive and unknown attack patterns and the so-called "needle in the haystack". With a growing number of cybersecurity threats,

PyTorch Machine Learning Framework Compromised with Malicious Dependency

The maintainers of the PyTorch package have warned users who have installed the nightly builds of the library between December 25, 2022, and December 30, 2022, to uninstall and download the latest versions following a dependency confusion attack. "PyTorch-nightly Linux packages installed via pip during that time installed a dependency, torchtriton, which was compromised on the Python Package
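One standard defense against this class of attack is hash pinning: refuse to install any artifact whose digest does not match a recorded value. A minimal sketch of the idea (the pinned hash is a dummy placeholder; pip supports this natively via --require-hashes with --hash pins in requirements files):

```python
# Verify a downloaded wheel against a pinned sha256 before installing,
# so a look-alike package from the wrong index fails the check.
import hashlib
from pathlib import Path

PINNED = {
    # wheel filename -> expected sha256; value here is a dummy placeholder
    "torchtriton-2.0.0.whl": "0" * 64,
}

def verify(wheel: Path) -> bool:
    digest = hashlib.sha256(wheel.read_bytes()).hexdigest()
    return PINNED.get(wheel.name) == digest

# Usage: refuse to install anything whose hash does not match the pin.
# if not verify(Path("downloads/torchtriton-2.0.0.whl")): abort_install()
```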

Shennina - Automating Host Exploitation With AI


Shennina is an automated host exploitation framework. The mission of the project is to fully automate scanning, vulnerability analysis, and exploitation using Artificial Intelligence. Shennina is integrated with Metasploit and Nmap for performing the attacks, as well as with an in-house Command-and-Control Server for exfiltrating data from compromised machines automatically.

This was developed by Mazin Ahmed and Khalid Farah within the HITB CyberWeek 2019 AI challenge. The project is developed based on the concept of DeepExploit by Isao Takaesu.

Shennina scans a set of input targets for available network services, uses its AI engine to identify recommended exploits for the attacks, and then attempts to test and attack the targets. If the attack succeeds, Shennina proceeds with the post-exploitation phase.

The AI engine is initially trained against live targets to learn reliable exploits against remote services.
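Purely as a hypothetical illustration of this "learn reliable exploits per service" idea (not Shennina's actual code), exploit selection can be framed as a multi-armed bandit: prefer exploits that have succeeded before while still exploring alternatives.

```python
# Hypothetical epsilon-greedy sketch of learning which exploit works best
# per service. Exploit names and outcomes are simulated, not real CVEs.
import random
from collections import defaultdict

stats = defaultdict(lambda: {"tries": 0, "wins": 0})  # (service, exploit) -> record

def pick_exploit(service: str, exploits: list[str], epsilon: float = 0.1) -> str:
    if random.random() < epsilon:
        return random.choice(exploits)  # explore
    # otherwise exploit: highest observed success rate for this service
    return max(exploits, key=lambda e: (
        stats[(service, e)]["wins"] / stats[(service, e)]["tries"]
        if stats[(service, e)]["tries"] else 0.0))

def record(service: str, exploit: str, success: bool) -> None:
    rec = stats[(service, exploit)]
    rec["tries"] += 1
    rec["wins"] += int(success)

# Simulated training against a lab target: only "cve_b" ever succeeds here.
exploits = ["cve_a", "cve_b", "cve_c"]
for _ in range(200):
    choice = pick_exploit("ssh", exploits, epsilon=0.3)
    record("ssh", choice, success=(choice == "cve_b" and random.random() < 0.7))

print("learned preference for ssh:",
      max(exploits, key=lambda e: stats[("ssh", e)]["wins"]))
```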

Shennina also supports a "Heuristics" mode for identifying recommended exploits.

The documentation can be found in the Docs directory within the project.


Features

  • Automated self-learning approach for finding exploits.
  • High performance using managed concurrency design.
  • Intelligent exploits clustering.
  • Post exploitation capabilities.
  • Deception detection.
  • Ransomware simulation capabilities.
  • Automated data exfiltration.
  • Vulnerability scanning mode.
  • Heuristic mode support for recommending exploits.
  • Windows, Linux, and macOS support for agents.
  • Scriptable attack method within the post-exploitation phase.
  • Exploits suggestions for Kernel exploits.
  • Out-of-Band technique testing for exploitation checks.
  • Automated exfiltration of important data on compromised servers.
  • Reporting capabilities.
  • Coverage for 40+ TTPs within the MITRE ATT&CK Framework.
  • Supports multi-input targets.

Why are we solving this problem with AI?

The problem could be solved with a hash tree rather than "AI"; however, the HITB CyberWeek AI Challenge required the project to find ways to solve it through AI.

Note

This project is a security experiment.

Legal Disclaimer

This project is made for educational and ethical testing purposes only. Usage of Shennina for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program.


Move over Patch Tuesday – it’s Ada Lovelace Day!

Hacking on actual computers is one thing, but hacking purposefully on imaginary computers is, these days, something we can only imagine.

Our Responsible Approach to Governing Artificial Intelligence

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.


Chief Information Officers and other technology decision makers continuously seek new and better ways to evaluate and manage their investments in innovation – especially the technologies that may create consequential decisions that impact human rights. As Artificial Intelligence (AI) becomes more prominent in vendor offerings, there is an increasing need to identify, manage, and mitigate the unique risks that AI-based technologies may bring.

Cisco is committed to maintaining a responsible, fair, and reflective approach to the governance, implementation, and use of AI technologies in our solutions. The Cisco Responsible AI initiative maximizes the potential benefits of AI while mitigating bias or inappropriate use of these technologies.

Gartner® Research recently published “Innovation Insight for Bias Detection/Mitigation, Explainable AI and Interpretable AI,” offering guidance on the best ways to incorporate AI-based solutions that facilitate “understanding, trust and performance accountability required by stakeholders.” This newsletter describes Cisco’s approach to Responsible AI governance and features this Gartner report.

Gartner - Introducing Cisco Responsible AI - August 2022

At Cisco, we are committed to managing AI development in a way that augments our focus on security, privacy, and human rights. The Cisco Responsible AI initiative and framework governs how we apply responsible AI controls in our product development lifecycle, how we manage incidents that arise, how we engage externally, and how AI is used across Cisco’s solutions, services, and enterprise operations.

Our Responsible AI framework comprises:

  • Guidance and Oversight by a committee of senior executives across Cisco businesses, engineering, and operations to drive adoption and guide leaders and developers on issues, technologies, processes, and practices related to AI
  • Lightweight Controls implemented within Cisco’s Secure Development Lifecycle compliance framework, including unique AI requirements
  • Incident Management that extends Cisco’s existing Incident Response system with a small team that reviews, responds, and works with engineering to resolve AI-related incidents
  • Industry Leadership to proactively engage, monitor, and influence industry associations and related bodies for emerging Responsible AI standards
  • External Engagement with governments to understand global perspectives on AI’s benefits and risks, and monitor, analyze, and influence legislation, emerging policy, and regulations affecting AI in all Cisco markets.

We base our Responsible AI initiative on principles consistent with Cisco’s operating practices and directly applicable to the governance of AI innovation. These principles—Transparency, Fairness, Accountability, Privacy, Security, and Reliability—are used to upskill our development teams to map to controls in the Cisco Secure Development Lifecycle and embed Security by Design, Privacy by Design, and Human Rights by Design in our solutions. And our principle-based approach empowers customers to take part in a continuous feedback cycle that informs our development process.

We strive to meet the highest standards of these principles when developing, deploying, and operating AI-based solutions to respect human rights, encourage innovation, and serve Cisco’s purpose to power an inclusive future for all.

Check out Gartner recommendations for integrating AI into an organization’s data systems in this Newsletter and learn more about Cisco’s approach to Responsible Innovation by reading our introduction “Transparency Is Key: Introducing Cisco Responsible AI.”



Sophos 2022 Threat Report: Malware, Mobile, Machine learning and more!

The crooks have shown that they're willing to learn and adapt their attacks, so we need to make sure we learn and adapt, too.
