FreshRSS

πŸ”’
❌ Secure Planet Training Courses Updated For 2019 - Click Here
There are new available articles, click to refresh the page.
Before yesterdayYour RSS feeds

Unlocking the Privacy Advantage to Build Trust in the Age of AI

The Cisco 2025 Data Privacy Benchmark Study offers insights into the evolving privacy landscape and privacy’s critical role in an AI-centric world.

Trust Through Transparency: Regulation’s Role in Consumer Confidence

The Cisco 2024 Consumer Privacy Survey highlights awareness and attitudes regarding personal data, legislation, Gen AI and data localization requirements.

CensysGPT: AI-Powered Threat Hunting for Cybersecurity Pros (Webinar)

Artificial intelligence (AI) is transforming cybersecurity, and those leading the charge are using it to outsmart increasingly advanced cyber threats. Join us for an exciting webinar, "The Future of Threat Hunting is Powered by Generative AI," where you'll explore how AI tools are shaping the future of cybersecurity defenses. During the session, Censys Security Researcher Aidan Holland will

N. Korea-linked Kimsuky Shifts to Compiled HTML Help Files in Ongoing Cyberattacks

The North Korea-linked threat actor known as Kimsuky (aka Black Banshee, Emerald Sleet, or Springtail) has been observed shifting its tactics, leveraging Compiled HTML Help (CHM) files as vectors to deliver malware for harvesting sensitive data. Kimsuky, active since at least 2012, is known to target entities located in South Korea as well as North America, Asia, and Europe. According

U.S. Sanctions Russians Behind 'Doppelganger' Cyber Influence Campaign

The U.S. Treasury Department's Office of Foreign Assets Control (OFAC) on Wednesday announced sanctions against two 46-year-old Russian nationals and the respective companies they own for engaging in cyber influence operations. Ilya Andreevich Gambashidze (Gambashidze), the founder of the Moscow-based company Social Design Agency (SDA), and Nikolai Aleksandrovich Tupikin (Tupikin), the CEO and

Generative AI Security - Secure Your Business in a World Powered by LLMs

Did you know that 79% of organizations are already leveraging Generative AI technologies? Much like the internet defined the 90s and the cloud revolutionized the 2010s, we are now in the era of Large Language Models (LLMs) and Generative AI. The potential of Generative AI is immense, yet it brings significant challenges, especially in security integration. Despite their powerful capabilities,

From Deepfakes to Malware: AI's Expanding Role in Cyber Attacks

Large language models (LLMs) powering artificial intelligence (AI) tools today could be exploited to develop self-augmenting malware capable of bypassing YARA rules. "Generative AI can be used to evade string-based YARA rules by augmenting the source code of small malware variants, effectively lowering detection rates," Recorded Future said in a new report shared with The Hacker News.

Over 100 Malicious AI/ML Models Found on Hugging Face Platform

As many as 100 malicious artificial intelligence (AI)/machine learning (ML) models have been discovered in the Hugging Face platform. These include instances where loading a pickle file leads to code execution, software supply chain security firm JFrog said. "The model's payload grants the attacker a shell on the compromised machine, enabling them to gain full control over victims'

Microsoft Releases PyRIT - A Red Teaming Tool for Generative AI

Microsoft has released an open access automation framework called PyRIT (short for Python Risk Identification Tool) to proactively identify risks in generative artificial intelligence (AI) systems. The red teaming tool is designed to "enable every organization across the globe to innovate responsibly with the latest artificial intelligence advances," Ram Shankar Siva Kumar, AI red team
❌