
Our Responsible Approach to Governing Artificial Intelligence

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.


Chief Information Officers and other technology decision makers continuously seek new and better ways to evaluate and manage their investments in innovation – especially the technologies that may create consequential decisions that impact human rights. As Artificial Intelligence (AI) becomes more prominent in vendor offerings, there is an increasing need to identify, manage, and mitigate the unique risks that AI-based technologies may bring.

Cisco is committed to maintaining a responsible, fair, and reflective approach to the governance, implementation, and use of AI technologies in our solutions. The Cisco Responsible AI initiative maximizes the potential benefits of AI while mitigating bias or inappropriate use of these technologies.

Gartner® Research recently published “Innovation Insight for Bias Detection/Mitigation, Explainable AI and Interpretable AI,” offering guidance on the best ways to incorporate AI-based solutions that facilitate the “understanding, trust and performance accountability required by stakeholders.” This newsletter describes Cisco’s approach to Responsible AI governance and features this Gartner report.

Gartner - Introducing Cisco Responsible AI - August 2022

At Cisco, we are committed to managing AI development in a way that augments our focus on security, privacy, and human rights. The Cisco Responsible AI initiative and framework govern how we apply responsible AI controls in our product development lifecycle, manage AI-related incidents, engage externally, and use AI across Cisco’s solutions, services, and enterprise operations.

Our Responsible AI framework comprises:

  • Guidance and Oversight by a committee of senior executives across Cisco businesses, engineering, and operations to drive adoption and guide leaders and developers on issues, technologies, processes, and practices related to AI
  • Lightweight Controls implemented within Cisco’s Secure Development Lifecycle compliance framework, including unique AI requirements
  • Incident Management that extends Cisco’s existing Incident Response system with a small team that reviews, responds, and works with engineering to resolve AI-related incidents
  • Industry Leadership to proactively engage, monitor, and influence industry associations and related bodies for emerging Responsible AI standards
  • External Engagement with governments to understand global perspectives on AI’s benefits and risks, and monitor, analyze, and influence legislation, emerging policy, and regulations affecting AI in all Cisco markets.

We base our Responsible AI initiative on principles consistent with Cisco’s operating practices and directly applicable to the governance of AI innovation. We use these principles—Transparency, Fairness, Accountability, Privacy, Security, and Reliability—to upskill our development teams, map them to controls in the Cisco Secure Development Lifecycle, and embed Security by Design, Privacy by Design, and Human Rights by Design in our solutions. And our principle-based approach empowers customers to take part in a continuous feedback cycle that informs our development process.

We strive to meet the highest standards of these principles when developing, deploying, and operating AI-based solutions to respect human rights, encourage innovation, and serve Cisco’s purpose to power an inclusive future for all.

Check out Gartner’s recommendations for integrating AI into an organization’s data systems in this newsletter, and learn more about Cisco’s approach to Responsible Innovation by reading our introduction, “Transparency Is Key: Introducing Cisco Responsible AI.”


We’d love to hear what you think. Ask a Question, Comment Below, and Stay Connected with Cisco Secure on social!

Cisco Secure Social Channels

Instagram
Facebook
Twitter
LinkedIn

How to Use AI in Cybersecurity and Avoid Being Trapped

The use of AI in cybersecurity is growing rapidly and is having a significant impact on threat detection, incident response, fraud detection, and vulnerability management. According to a report by Juniper Research, the use of AI for fraud detection and prevention is expected to save businesses $11 billion annually by 2023. But how can AI be integrated into a business’s cybersecurity infrastructure?

The US Air Force Is Moving Fast on AI-Piloted Fighter Jets

After successful autonomous flight tests in December, the military is ramping up its plans to bring artificial intelligence to the skies.

Fake ChatGPT Chrome Extension Hijacking Facebook Accounts for Malicious Advertising

A fake ChatGPT-branded Chrome browser extension has been found to come with capabilities to hijack Facebook accounts and create rogue admin accounts, highlighting one of the different methods cyber criminals are using to distribute malware. "By hijacking high-profile Facebook business accounts, the threat actor creates an elite army of Facebook bots and a malicious paid media apparatus," Guardio

Fake ChatGPT Chrome Browser Extension Caught Hijacking Facebook Accounts

Google has stepped in to remove a bogus Chrome browser extension from the official Web Store that masqueraded as OpenAI's ChatGPT service to harvest Facebook session cookies and hijack the accounts. The "ChatGPT For Google" extension, a trojanized version of a legitimate open source browser add-on, attracted over 9,000 installations since March 14, 2023, prior to its removal. It was originally

OpenAI Reveals Redis Bug Behind ChatGPT User Data Exposure Incident

OpenAI on Friday disclosed that a bug in the Redis open source library was responsible for the exposure of other users' personal information and chat titles in the upstart's ChatGPT service earlier this week. The glitch, which came to light on March 20, 2023, enabled certain users to view brief descriptions of other users' conversations from the chat history sidebar, prompting the company to

Breaking the Mold: Pen Testing Solutions That Challenge the Status Quo

Malicious actors are constantly evolving their tactics, techniques, and procedures (TTPs) to adapt quickly to political, technological, and regulatory changes. A few emerging threats that organizations of all sizes should be aware of include the following: Increased use of Artificial Intelligence and Machine Learning: Malicious actors are increasingly leveraging AI and machine learning to

Microsoft's ‘Security Copilot’ Sics ChatGPT on Security Breaches

The new tool aims to deliver the network insights and coordination that “AI” security systems have long promised.

Microsoft Introduces GPT-4 AI-Powered Security Copilot Tool to Empower Defenders

Microsoft on Tuesday unveiled Security Copilot in limited preview, marking its continued quest to embed AI-oriented features in an attempt to offer "end-to-end defense at machine speed and scale." Powered by OpenAI's GPT-4 generative AI and its own security-specific model, it's billed as a security analysis tool that enables cybersecurity analysts to quickly respond to threats, process signals,

Italian Watchdog Bans OpenAI's ChatGPT Over Data Protection Concerns

The Italian data protection watchdog, Garante per la Protezione dei Dati Personali (aka Garante), has imposed a temporary ban of OpenAI's ChatGPT service in the country, citing data protection concerns. To that end, it has ordered the company to stop processing users' data with immediate effect, stating it intends to investigate the company over whether it's unlawfully processing such data in

The Hacking of ChatGPT Is Just Getting Started

Security researchers are jailbreaking large language models to get around safety rules. Things could get much worse.

How ChatGPT—and Bots Like It—Can Spread Malware

Generative AI is a tool, which means it can be used by cybercriminals, too. Here’s how to protect yourself.

Google Cloud Introduces Security AI Workbench for Faster Threat Detection and Analysis

Google's cloud division is following in the footsteps of Microsoft with the launch of Security AI Workbench that leverages generative AI models to gain better visibility into the threat landscape.  Powering the cybersecurity suite is Sec-PaLM, a specialized large language model (LLM) that's "fine-tuned for security use cases." The idea is to take advantage of the latest advances in AI to augment

Brace Yourself for the 2024 Deepfake Election

No matter what happens with generative AI, its disruptive forces are already beginning to play a role in the fast-approaching US presidential race.

Why Your Detection-First Security Approach Isn't Working

Stopping new and evasive threats is one of the greatest challenges in cybersecurity. This is among the biggest reasons why attacks increased dramatically in the past year yet again, despite the estimated $172 billion spent on global cybersecurity in 2022. Armed with cloud-based tools and backed by sophisticated affiliate networks, threat actors can develop new and evasive malware more quickly

ChatGPT is Back in Italy After Addressing Data Privacy Concerns

OpenAI, the company behind ChatGPT, has officially made a return to Italy after the company met the data protection authority's demands ahead of the April 30, 2023 deadline. The development was first reported by the Associated Press. OpenAI's CEO, Sam Altman, tweeted, "we're excited ChatGPT is available in [Italy] again!" The reinstatement comes following Garante's decision to temporarily block 

How To Delete Your Data From ChatGPT

OpenAI has new tools that give you more control over your information—although they may not go far enough.

Searching for AI Tools? Watch Out for Rogue Sites Distributing RedLine Malware

Malicious Google Search ads for generative AI services like OpenAI ChatGPT and Midjourney are being used to direct users to sketchy websites as part of a BATLOADER campaign designed to deliver RedLine Stealer malware. "Both AI services are extremely popular but lack first-party standalone apps (i.e., users interface with ChatGPT via their web interface while Midjourney uses Discord)," eSentire

How AI Protects (and Attacks) Your Inbox

Criminals may use artificial intelligence to scam you. Companies, like Google, are looking for ways AI and machine learning can help prevent phishing.

The Dangers of Artificial Intelligence

By: McAfee

Over the decades, Hollywood has depicted artificial intelligence (AI) in multiple unsettling ways. In their futuristic settings, the AI begins to think for itself, outsmarts the humans, and overthrows society. The resulting dark world is left in a constant barrage of storms – metaphorically and meteorologically. (It’s always so gloomy and rainy in those movies.) 

AI has been a part of manufacturing, shipping, and other industries for several years now. But the emergence of mainstream AI in daily life is stirring debates about its use. Content, art, video, and voice generation tools can make you write like Shakespeare, look like Tom Cruise, or create digital masterpieces in the style of Van Gogh. While it starts out as fun and games, an overreliance or misuse of AI can quickly turn shortcuts into irresponsibly cut corners and pranks into malicious impersonations.   

It’s imperative that everyone interact responsibly with mainstream AI tools like ChatGPT, Bard, Craiyon, and Voice.ai, among others, to avoid these three real dangers of AI that you’re most likely to encounter. 

1. AI Hallucinations

The cool thing about AI is it has advanced to the point where it does think for itself. It’s constantly learning and forming new patterns. The more questions you ask it, the more data it collects and the “smarter” it gets. However, when you ask ChatGPT a question it doesn’t know the answer to, it doesn’t admit that it doesn’t know. Instead, it’ll make up an answer like a precocious schoolchild. This phenomenon is known as an AI hallucination. 

One prime example of an AI hallucination occurred in a New York courtroom. A lawyer presented a lengthy brief that cited multiple law cases to back his point. It turns out the lawyer used ChatGPT to write the entire brief and he didn’t fact check the AI’s work. ChatGPT fabricated its supporting citations, none of which existed. 

AI hallucinations could become a threat to society by populating the internet with false information. Researchers and writers have a duty to thoroughly double-check any work they outsource to text generation tools like ChatGPT. When a trustworthy online source publishes content and asserts it as the unbiased truth, readers should be able to trust that the publisher isn’t leading them astray. 

2. Deepfake, AI Art, and Fake News

We all know that you can’t trust everything you read on the internet. Deepfake and AI-generated art deepen the mistrust. Now, you can’t trust everything you see on the internet. 

Deepfake is the digital manipulation of a photo or video to portray an event that never happened or portray a person doing or saying something they never did or said. AI art creates new images using a compilation of published works on the internet to fulfill the prompt. 

Deepfake and AI art become a danger to the public when people use them to supplement fake news reports. Individuals and organizations who feel strongly about their side of an issue may shunt integrity to the side to win new followers to their cause. Fake news is often incendiary and in extreme cases can cause unrest.  

Before you share a “news” article with your social media following or shout about it to others, do some additional research to ensure its accuracy. Additionally, scrutinize the video or image accompanying the story. A deepfake gives itself away when facial expressions or hand gestures don’t look quite right. Also, the face may distort if the hands get too close to it. To spot AI art, think carefully about the context. Is it too fantastic or terrible to be true? Check out the shadows, shading, and the background setting for anomalies. 

3. AI Voice Scams

An emerging dangerous use of AI is cropping up in AI voice scams. Phishers have attempted to get people’s personal details and gain financially over the phone for decades. But now with the help of AI voice tools, their scams are entering a whole new dimension of believability.  

With as little as three seconds of genuine audio, AI voice generators can mimic someone’s voice with up to 95% accuracy. While AI voice generators may add some humor to a comedy deepfake video, criminals are using the technology to seriously frighten people and scam them out of money at the same time. The criminal will impersonate someone using their voice and call the real person’s loved one, saying they’ve been robbed or sustained an accident. McAfee’s Beware the Artificial Imposter report discovered that 77% of people targeted by an AI voice scam lost money as a result. Seven percent of people lost as much as $5,000 to $15,000. 

Use AI Responsibly 

Google’s code of conduct states “Don’t be evil.”2 Because AI relies on input from humans, we have the power to make AI as benevolent or as malevolent as we are. There’s a certain amount of trust placed in the engineers who hold the future of the technology – and, if Hollywood is to be believed, the fate of humanity – in their deft hands and brilliant minds. 

“60 Minutes” placed AI’s influence on society on a tier with fire, agriculture, and electricity.3 Because AI never has to take a break, it can learn and teach itself new things every second of every day. It’s advancing quickly, and some of the written and visual art it creates can result in some touching expressions of humanity. But AI doesn’t quite understand the emotion it portrays. It’s simply a game of making patterns. Is AI – especially its use in creative pursuits – dimming the spark of humanity? That remains to be seen. 

When used responsibly and in moderation in daily life, it may make us more efficient and inspire us to think in new ways. Be on the lookout for the dangers of AI and use this amazing technology for good. 

1 The New York Times, “Here’s What Happens When Your Lawyer Uses ChatGPT”

2 Alphabet, “Google Code of Conduct”

3 60 Minutes, “Artificial Intelligence Revolution”

The post The Dangers of Artificial Intelligence appeared first on McAfee Blog.

Over 100,000 Stolen ChatGPT Account Credentials Sold on Dark Web Marketplaces

Over 101,100 compromised OpenAI ChatGPT account credentials have found their way on illicit dark web marketplaces between June 2022 and May 2023, with India alone accounting for 12,632 stolen credentials. The credentials were discovered within information stealer logs made available for sale on the cybercrime underground, Group-IB said in a report shared with The Hacker News. "The number of

How Generative AI Can Dupe SaaS Authentication Protocols — And Effective Ways To Prevent Other Key AI Risks in SaaS

Security and IT teams are routinely forced to adopt software before fully understanding the security risks. And AI tools are no exception. Employees and business leaders alike are flocking to generative AI software and similar programs, often unaware of the major SaaS security vulnerabilities they're introducing into the enterprise. A February 2023 generative AI survey of 1,000 executives 

The Right Way to Enhance CTI with AI (Hint: It's the Data)

Cyber threat intelligence is an effective weapon in the ongoing battle to protect digital assets and infrastructure - especially when combined with AI. But AI is only as good as the data feeding it. Access to unique, underground sources is key. Threat Intelligence offers tremendous value to people and companies. At the same time, its ability to address organizations' cybersecurity needs and the

3 Reasons SaaS Security is the Imperative First Step to Ensuring Secure AI Usage

In today's fast-paced digital landscape, the widespread adoption of AI (Artificial Intelligence) tools is transforming the way organizations operate. From chatbots to generative AI models, these SaaS-based applications offer numerous benefits, from enhanced productivity to improved decision-making. Employees using AI tools experience the advantages of quick answers and accurate results, enabling

The Risks and Preventions of AI in Business: Safeguarding Against Potential Pitfalls

Artificial intelligence (AI) holds immense potential for optimizing internal processes within businesses. However, it also comes with legitimate concerns regarding unauthorized use, including data loss risks and legal consequences. In this article, we will explore the risks associated with AI implementation and discuss measures to minimize damages. Additionally, we will examine regulatory

WormGPT: New AI Tool Allows Cybercriminals to Launch Sophisticated Cyber Attacks

By: THN
With generative artificial intelligence (AI) becoming all the rage these days, it's perhaps not surprising that the technology has been repurposed by malicious actors to their own advantage, enabling avenues for accelerated cybercrime. According to findings from SlashNext, a new generative AI cybercrime tool called WormGPT has been advertised on underground forums as a way for adversaries to

New AI Tool 'FraudGPT' Emerges, Tailored for Sophisticated Attacks

By: THN
Following the footsteps of WormGPT, threat actors are advertising yet another cybercrime generative artificial intelligence (AI) tool dubbed FraudGPT on various dark web marketplaces and Telegram channels. "This is an AI bot, exclusively targeted for offensive purposes, such as crafting spear phishing emails, creating cracking tools, carding, etc.," Netenrich security researcher Rakesh Krishnan 

A New Attack Impacts ChatGPT—and No One Knows How to Stop It

Researchers found a simple way to make ChatGPT, Bard, and other chatbots misbehave, proving that AI is hard to tame.

The Senate’s AI Future Is Haunted by the Ghost of Privacy Past

The US Congress is trying to tame the rapid rise of artificial intelligence. But senators’ failure to tackle privacy reform is making the task a nightmare.

Microsoft’s AI Red Team Has Already Made the Case for Itself

Since 2018, a dedicated team within Microsoft has attacked machine learning systems to make them safer. But with the public release of new generative AI tools, the field is already evolving.

Learn How Your Business Data Can Amplify Your AI/ML Threat Detection Capabilities

In today's digital landscape, your business data is more than just numbers—it's a powerhouse. Imagine leveraging this data not only for profit but also for enhanced AI and Machine Learning (ML) threat detection. For companies like Comcast, this isn't a dream. It's reality. Your business comprehends its risks, vulnerabilities, and the unique environment in which it operates. No generic,

Everything You Wanted to Know About AI Security but Were Afraid to Ask

There’s been a great deal of AI hype recently, but that doesn’t mean the robots are here to replace us. This article sets the record straight and explains how businesses should approach AI. From musing about self-driving cars to fearing AI bots that could destroy the world, there has been a great deal of AI hype in the past few years. AI has captured our imaginations, dreams, and occasionally,

AI Chatbots Are Invading Your Local Government—and Making Everyone Nervous

State and local governments in the US are scrambling to harness tools like ChatGPT to unburden their bureaucracies, rushing to write their own rules—and avoid generative AI's many pitfalls.

The US Congress Has Trust Issues. Generative AI Is Making It Worse

Senators are meeting with Silicon Valley's elite to learn how to deal with AI. But can Congress tackle the rapidly emerging tech before working on itself?

Live Webinar: Overcoming Generative AI Data Leakage Risks

As the adoption of generative AI tools, like ChatGPT, continues to surge, so does the risk of data exposure. According to Gartner’s "Emerging Tech: Top 4 Security Risks of GenAI" report, privacy and data security is one of the four major emerging risks within generative AI. A new webinar featuring a multi-time Fortune 100 CISO and the CEO of LayerX, a browser extension solution, delves into this

Get Yourself AI-powered Scam Protection That Spots and Blocks Scams in Real Time

By: McAfee

The tables have turned. Now you can use AI to spot and block scam texts before they do you harm. 

You might have heard how scammers have tapped into the power of AI. It provides them with powerful tools to create convincing-looking scams on a massive scale, which can flood your phone with annoying and malicious texts. 

The good news is that we use AI too. And we have for some time to keep you safe. Now, we’ve put AI to use in another powerful way—to put an end to scam texts on your phone. 

Our new McAfee Scam Protection™ automatically identifies and alerts you if it detects a dangerous URL in your texts. No more wondering if a package delivery message or bank notification is real or not. Our patented AI technology instantaneously detects malicious links and sends an alert to stop you before you click. And as a second line of defense, it can block risky sites if you accidentally follow a scam link in a text, email, social media, and more. 

Stop scam texts and their malicious links.  

The time couldn’t be more right for this kind of protection. Last year, Americans lost $330 million to text scams alone, more than double the previous year, with an average reported loss of $1,000, according to the Federal Trade Commission. The deluge of these new sophisticated AI-generated scams is making it harder than ever to tell what’s real from what’s fake.  

Which is where our use of AI comes in. With it, you can turn the table on scammers and their AI tools.  

Here’s a closer look at how McAfee Scam Protection™ works: 

  • Proactive and automatic protection: Get notifications about a scam text before you even open the message. After you grant permission to scan the URLs in your texts, McAfee Scam Protection takes charge and will let you know which texts aren’t safe and shouldn’t be opened. 
  • Patented and powerful AI: McAfee’s AI runs in real-time and is constantly analyzing and processing millions of malicious links from around the world to provide better detection. This means McAfee Scam Protection can protect you from advanced threats including new zero-day threats that haven’t been seen before. McAfee’s AI continually gets smarter to stay ahead of cybercriminals to protect you even better. 
  • Simple and easy to use: When you’re set up, McAfee Scam Protection goes to work immediately. No copying or pasting or checking whether a text or email is a scam. We do the work for you and the feature will alert you if it detects a dangerous link and blocks risky sites in real time if you accidentally click.   
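The screening flow described above — pull the links out of an incoming text, check them against known-bad threat data, and alert before the user taps — can be sketched in a few lines. This is purely an illustrative sketch, not McAfee's published implementation; the blocklisted domains and function names below are hypothetical, and a real product would query live, continuously updated threat intelligence rather than a static set.

```python
import re

# Hypothetical example data: a real service would use live threat intelligence.
BLOCKLISTED_HOSTS = {
    "track-your-parcel.example",
    "bank-alert-verify.example",
}

URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def extract_hosts(message: str) -> list[str]:
    """Pull the hostname out of every URL found in a text message."""
    hosts = []
    for url in URL_PATTERN.findall(message):
        # Strip the scheme, then keep everything up to the first /, ?, or #.
        host = re.sub(r"^https?://", "", url, flags=re.IGNORECASE)
        host = re.split(r"[/?#]", host)[0].lower()
        hosts.append(host)
    return hosts

def screen_message(message: str) -> bool:
    """Return True if the message contains a link to a blocklisted host."""
    return any(host in BLOCKLISTED_HOSTS for host in extract_hosts(message))
```

The point of the sketch is the ordering: the link is evaluated before the user ever opens the message, which is what makes the protection proactive rather than reactive.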

How do I get McAfee Scam Protection™? 

McAfee Scam Protection is free for most existing customers, and free to try for new customers. 

Most McAfee customers now have McAfee Scam Protection available. Simply update your app. There’s no need to purchase or download anything separately. Set up McAfee Scam Protection in your mobile app, then enable Safe Browsing for extra protection or download our web protection extension for your PC or Mac from the McAfee Protection Center. Some exclusions apply¹. 

For new customers, McAfee Scam Protection is available as part of a free seven-day trial of McAfee Mobile Security. After the trial period, McAfee Mobile Security is $2.99 a month or $29.99 annually for a one-year subscription. 

As part of our new Scam Protection, you can benefit from McAfee’s risky link identification on any platform you use. It can block dangerous links should you accidentally click on one, whether that’s through texts, emails, social media, or a browser. It’s powered by AI as well, and you’ll get it by setting up Safe Browsing on your iOS² or Android device—and by using the WebAdvisor extension on PCs, Macs and iOS. 

Scan the QR code to download McAfee Scam Protection™ from the Google Play Store

Yes, the tables have turned on scammers. 

AI works in your favor. Just as it has for some time now if you’ve used McAfee for your online protection. McAfee Scam Protection takes it to a new level. As scammers use AI to create increasingly sophisticated attacks, McAfee Scam Protection can help you tell what’s real and what’s fake. 

 


  1. Customers currently with McAfee+, McAfee Total Protection, McAfee LiveSafe, and McAfee Mobile Security plans have McAfee Scam Protection™ included in their subscription.
  2. Scam text filtering is coming to iOS devices in October.  

The post Get Yourself AI-powered Scam Protection That Spots and Blocks Scams in Real Time appeared first on McAfee Blog.

Your Boss’s Spyware Could Train AI to Replace You

Corporations are using software to monitor employees on a large scale. Some experts fear the data these tools collect could be used to automate people out of their jobs.

Webinar — AI vs. AI: Harnessing AI Defenses Against AI-Powered Risks

Generative AI is a double-edged sword, if there ever was one. There is broad agreement that tools like ChatGPT are unleashing waves of productivity across the business, from IT, to customer experience, to engineering. That's on the one hand.  On the other end of this fencing match: risk. From IP leakage and data privacy risks to the empowering of cybercriminals with AI tools, generative AI

Microsoft's AI-Powered Bing Chat Ads May Lead Users to Malware-Distributing Sites

By: THN
Malicious ads served inside Microsoft Bing's artificial intelligence (AI) chatbot are being used to distribute malware when searching for popular tools. The findings come from Malwarebytes, which revealed that unsuspecting users can be tricked into visiting booby-trapped sites and installing malware directly from Bing Chat conversations. Introduced by Microsoft in February 2023, Bing Chat is an 

Webinar: How vCISOs Can Navigate the Complex World of AI and LLM Security

In today's rapidly evolving technological landscape, the integration of Artificial Intelligence (AI) and Large Language Models (LLMs) has become ubiquitous across various industries. This wave of innovation promises improved efficiency and performance, but lurking beneath the surface are complex vulnerabilities and unforeseen risks that demand immediate attention from cybersecurity professionals

How to Guard Your Data from Exposure in ChatGPT

ChatGPT has transformed the way businesses generate textual content, which can potentially result in a quantum leap in productivity. However, Generative AI innovation also introduces a new dimension of data exposure risk, when employees inadvertently type or paste sensitive business data into ChatGPT, or similar applications. DLP solutions, the go-to solution for similar challenges, are

Exploring the Realm of Malicious Generative AI: A New Digital Security Challenge

Recently, the cybersecurity landscape has been confronted with a daunting new reality – the rise of malicious Generative AI, like FraudGPT and WormGPT. These rogue creations, lurking in the dark corners of the internet, pose a distinctive threat to the world of digital security. In this article, we will look at the nature of Generative AI fraud, analyze the messaging surrounding these creations,

The AI-Generated Child Abuse Nightmare Is Here

Thousands of child abuse images are being created with AI. New images of old victims are appearing, as criminals trade datasets.

Google Expands Its Bug Bounty Program to Tackle Artificial Intelligence Threats

Google has announced that it's expanding its Vulnerability Rewards Program (VRP) to compensate researchers for finding attack scenarios tailored to generative artificial intelligence (AI) systems in an effort to bolster AI safety and security. "Generative AI raises new and different concerns than traditional digital security, such as the potential for unfair bias, model manipulation or

The UN Hired an AI Company to Untangle the Israeli-Palestinian Crisis

CulturePulse's AI model promises to create a realistic virtual simulation of every Israeli and Palestinian citizen. But don't roll your eyes: It's already been put to the test in other conflict zones.

Predictive AI in Cybersecurity: Outcomes Demonstrate All AI is Not Created Equally

Here is what matters most when it comes to artificial intelligence (AI) in cybersecurity: Outcomes.  As the threat landscape evolves and generative AI is added to the toolsets available to defenders and attackers alike, evaluating the relative effectiveness of various AI-based security offerings is increasingly important — and difficult. Asking the right questions can help you spot solutions

Offensive and Defensive AI: Let’s Chat(GPT) About It

ChatGPT: Productivity tool, great for writing poems, and… a security risk?! In this article, we show how threat actors can exploit ChatGPT, but also how defenders can use it for leveling up their game. ChatGPT is the most swiftly growing consumer application to date. The extremely popular generative AI chatbot has the ability to generate human-like, coherent and contextually relevant responses.

Guide: How vCISOs, MSPs and MSSPs Can Keep their Customers Safe from Gen AI Risks

Download the free guide, "It's a Generative AI World: How vCISOs, MSPs and MSSPs Can Keep their Customers Safe from Gen AI Risks." ChatGPT now boasts anywhere from 1.5 to 2 billion visits per month. Countless sales, marketing, HR, IT executive, technical support, operations, finance and other functions are feeding data prompts and queries into generative AI engines. They use these tools to write

Social Media Sleuths, Armed With AI, Are Identifying Dead Bodies

Poverty, fentanyl, and lack of public funding mean morgues are overloaded with unidentified bodies. TikTok and Facebook pages are filling the gap—with AI proving a powerful and controversial new tool.

U.S., U.K., and Global Partners Release Secure AI System Development Guidelines

The U.K. and U.S., along with international partners from 16 other countries, have released new guidelines for the development of secure artificial intelligence (AI) systems. "The approach prioritizes ownership of security outcomes for customers, embraces radical transparency and accountability, and establishes organizational structures where secure design is a top priority," the U.S.

AI & Your Family: The Wows and Potential Risks

By: McAfee

When we come across the term Artificial Intelligence (AI), our mind often ventures into the realm of sci-fi movies like I, Robot, Matrix, and Ex Machina. We’ve always perceived AI as a futuristic concept, something that’s happening in a galaxy far, far away. However, AI is not only here in our present but has also been a part of our lives for several years in the form of various technological devices and applications.

In our day-to-day lives, we use AI in many instances without even realizing it. AI has permeated our homes and workplaces and is at our fingertips through our smartphones. From cell phones with built-in smart assistants to home assistants that carry out voice commands, from social networks that determine what content we see to music apps that curate playlists based on our preferences, AI has its footprints everywhere. Therefore, it’s important to not only embrace the wows of this impressive technology but also understand and discuss the potential risks associated with it.

Dig Deeper: Artificial Imposters—Cybercriminals Turn to AI Voice Cloning for a New Breed of Scam

AI in Daily Life: A Blend of Convenience and Intrusion

AI, a term that might sound intimidating to many, is not so once we understand it. It is essentially technology that can be programmed to achieve certain goals without human assistance. In simple terms, it’s a computer’s ability to process data, evaluate it, make predictions, and take appropriate action. This smart way of performing tasks is being implemented in education, business, manufacturing, retail, transportation, and almost every other industry and cultural sector you can think of.

AI has been doing a lot of good too. For instance, Instagram, the second most popular social network, is now deploying AI technology to detect and combat cyberbullying in both comments and photos. No doubt, AI is having a significant impact on everyday life and is poised to reshape the future landscape. However, alongside its benefits, AI has brought forward a set of new challenges and risks. From self-driving cars malfunctioning to potential jobs lost to AI robots, from fake videos and images to privacy breaches, the concerns are real and need timely discussion and preventive measures.

Navigating the Wows and Risks of AI

AI has made it easier for people to face-swap within images and videos, leading to “deep fake” videos that appear remarkably realistic and often go viral. A desktop application called FakeApp allows users to seamlessly swap faces and share fake videos and images. While this displays the power of AI technology, it also brings to light the responsibility and critical thinking required when consuming and sharing online content.

Dig Deeper: The Future of Technology: AI, Deepfake, & Connected Devices

Yet another concern raised by AI is privacy breaches. The Cambridge Analytica/Facebook scandal of 2018, in which Cambridge Analytica allegedly used AI technology unethically to collect Facebook user data, serves as a reminder that our private (and public) information can be exploited for financial or political gain. It is therefore crucial to take steps such as locking down privacy settings on social networks and being mindful of the information shared in the public feed, including reactions and comments on other content.

McAfee Pro Tip: Cybercriminals employ advanced methods to deceive individuals, propagating sensationalized fake news, creating deceptive catfish dating profiles, and orchestrating harmful impersonations. Recognizing sophisticated AI-generated content can pose a challenge, but certain indicators may signal that you’re encountering a dubious image or interacting with a perpetrator operating behind an AI-generated profile. Know the indicators. 

AI and Cybercrime

With the advent of AI, cybercrime has found a new ally. As per McAfee’s Threats Prediction Report, AI technology might enable hackers to bypass security measures on networks undetected. This can lead to data breaches, malware attacks, ransomware, and other criminal activities. Moreover, AI-generated phishing emails are scamming people into unknowingly handing over sensitive data.

Dig Deeper: How to Keep Your Data Safe From the Latest Phishing Scam

Bogus emails are becoming highly personalized and can trick intelligent users into clicking malicious links. Given the sophistication of these AI-related scams, it is vital to constantly remind ourselves and our families to be cautious with every click, even those from known sources. The need to be alert and informed cannot be overstressed, especially in times when AI and cybercrime often seem to be two sides of the same coin.

IoT Security Concerns in an AI-Powered World

As homes evolve to be smarter and synced with AI-powered Internet of Things (IoT) products, potential threats have proliferated. These threats are not limited to computers and smartphones but extend to AI-enabled devices such as voice-activated assistants. According to McAfee’s Threat Prediction Report, these IoT devices are particularly susceptible as points of entry for cybercriminals. Other devices at risk, as highlighted by security experts, include routers and tablets.

This means we need to secure all our connected devices and our home internet at its source – the network. Routers provided by your ISP (Internet Service Provider) are often less secure, so consider purchasing your own. As a primary step, ensure that all your devices are updated regularly. More importantly, change the default passwords on these devices and protect both your primary network and your guest network with strong passwords.
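Much of this advice comes down to replacing weak default passwords with strong ones. As a rough illustrative sketch (a simple heuristic, not a McAfee tool or a substitute for a real password manager), a character-pool entropy estimate shows why a factory default like “admin” falls far short of a reasonable strength threshold:

```python
import math
import string

def estimate_entropy_bits(password: str) -> float:
    """Rough entropy estimate: length * log2(size of character pool used)."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)  # 32 printable symbols
    if pool == 0:
        return 0.0
    return len(password) * math.log2(pool)

def is_strong(password: str, min_bits: float = 60.0) -> bool:
    """Treat ~60 bits of estimated entropy as a minimum bar (an assumption)."""
    return estimate_entropy_bits(password) >= min_bits
```

The 60-bit threshold and the pool-based formula are simplifying assumptions; real attackers use dictionaries and common patterns, so length and unpredictability matter more than symbol variety alone.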

How to Discuss AI with Your Family

Having an open dialogue about AI and its implications is key to navigating the intricacies of this technology. Parents need to have open discussions with kids about the positives and negatives of AI. When discussing fake videos and images, emphasize the importance of critical thinking before sharing any content online. You might even show them FakeApp, the desktop face-swapping application mentioned above, so they can see firsthand how realistic and shareable deep fakes have become.

Privacy is another critical area for discussion. Since the Cambridge Analytica/Facebook scandal of 2018, conversations about privacy breaches have become more significant. Incidents like this remind us how our private (and public) information can be misused for financial or political gain. Locking down privacy settings, being mindful of the information shared, and understanding the implications of reactions and comments are all topics worth discussing.

Being Proactive Against AI-Enabled Cybercrime

Awareness and knowledge are the best tools against AI-enabled cybercrime. Making families understand that bogus emails can now be highly personalized and can trick even the most tech-savvy users into clicking malicious links is essential. AI can generate phishing emails, scamming people into handing over sensitive data. In this context, constant reminders to be cautious with every click, even those from known sources, are necessary.

Dig Deeper: Malicious Websites – The Web is a Dangerous Place

The advent of AI has also likely allowed hackers to bypass security measures on networks undetected, leading to data breaches, malware attacks, and ransomware. Therefore, being alert and informed is more than just a precaution – it is a vital safety measure in the digital age.

Final Thoughts

Artificial Intelligence has indeed woven itself into our everyday lives, making things more convenient, efficient, and connected. However, with these advancements come potential risks and challenges. From privacy breaches and fake content to AI-enabled cybercrime, the concerns are real and need our full attention. By understanding AI better, having open discussions, and taking appropriate security measures, we can leverage this technology’s immense potential without falling prey to its risks. In our AI-driven world, being informed, aware, and proactive is the key to staying safe and secure.

To safeguard and fortify your online identity, we strongly recommend that you delve into the extensive array of protective features offered by McAfee+. This comprehensive cybersecurity solution is designed to provide you with a robust defense against a wide spectrum of digital threats, ranging from malware and phishing attacks to data breaches and identity theft.

The post AI & Your Family: The Wows and Potential Risks appeared first on McAfee Blog.

OpenAI’s Custom Chatbots Are Leaking Their Secrets

Released earlier this month, OpenAI’s GPTs let anyone create custom chatbots. But some of the data they’re built on is easily exposed.

Anduril’s New Drone Killer Is Locked on to AI-Powered Warfare

Autonomous drones are rapidly changing combat. Anduril’s new one aims to gain an edge with jet power and AI.

A New Trick Uses AI to Jailbreak AI Models—Including GPT-4

Adversarial algorithms can systematically probe large language models like OpenAI’s GPT-4 for weaknesses that can make them misbehave.

Russia's AI-Powered Disinformation Operation Targeting Ukraine, U.S., and Germany

The Russia-linked influence operation called Doppelganger has targeted Ukrainian, U.S., and German audiences through a combination of inauthentic news sites and social media accounts. These campaigns amplify content designed to undermine Ukraine, as well as to propagate anti-LGBTQ+ sentiment, cast doubt on U.S. military competence, and play up Germany's economic and social issues, according to a new

The Most Dangerous People on the Internet in 2023

From Sam Altman and Elon Musk to ransomware gangs and state-backed hackers, these are the individuals and groups that spent this year disrupting the world as we know it.

NIST Warns of Security and Privacy Risks from Rapid AI System Deployment

The U.S. National Institute of Standards and Technology (NIST) is calling attention to the privacy and security challenges that arise as a result of increased deployment of artificial intelligence (AI) systems in recent years. “These security and privacy challenges include the potential for adversarial manipulation of training data, adversarial exploitation of model vulnerabilities to