FreshRSS

Today — April 23rd 2024 — Your RSS feeds

U.S. Imposes Visa Restrictions on 13 Linked to Commercial Spyware Misuse

The U.S. Department of State on Monday said it's taking steps to impose visa restrictions on 13 individuals who are allegedly involved in the development and sale of commercial spyware or who are immediately family members of those involved in such businesses. "These individuals have facilitated or derived financial benefit from the misuse of this technology, which

Misconfigured cloud server leaked clues of North Korean animation scam

Outsourcers outsourced work for the BBC, Amazon, and HBO Max to the hermit kingdom

A misconfigured cloud server that used a North Korean IP address has led to the discovery that film production studios including the BBC, Amazon, and HBO Max could be inadvertently hiring workers from the hermit kingdom for animation projects.…

Russia's APT28 Exploited Windows Print Spooler Flaw to Deploy 'GooseEgg' Malware

The Russia-linked nation-state threat actor tracked as APT28 weaponized a security flaw in the Microsoft Windows Print Spooler component to deliver a previously unknown custom malware called GooseEgg. The post-compromise tool, which is said to have been used since at least June 2020 and possibly as early as April 2019, leveraged a now-patched flaw that allowed for

Weekly Update 396

"More Data Breaches Than You Can Shake a Stick At". That seems like a reasonable summary, and I suggest there are two main reasons for this observation. Firstly, there are simply loads of breaches happening, and you know this already because, well, you read my stuff! Secondly, there are a couple of Twitter accounts in particular that are taking incidents appearing across a combination of a popular clear web hacking forum and various dark web ransomware websites and "raising them to the surface", so to speak. That is, incidents that may previously have remained on the fringe are being regularly positioned in the spotlight, where they have much greater visibility. The end result is greater awareness and a longer backlog of breaches to process than I've ever had before!


References

  1. Sponsored by: Report URI: Guarding you from rogue JavaScript! Don’t get pwned; get real-time alerts & prevent breaches #SecureYourSite
  2. Le Slip Français was breached by "shopifyGUY" (I wonder where all these Shopify API keys are coming from?!)
  3. Roku got hit with a pretty sizeable credential stuffing attack (looks like they're now mandating multi-step auth for everyone, which is certainly one way of tackling this)
  4. There's an extraordinary rate of new breaches appearing at the moment (that's a link to the HackManac Twitter account that's been very good at reporting on these)

Change Healthcare Finally Admits It Paid Ransomware Hackers—and Still Faces a Patient Data Leak

The company belatedly conceded both that it had paid the cybercriminals extorting it and that patient data nonetheless ended up on the dark web.

Old Windows print spooler bug is latest target of Russia's Fancy Bear gang

Putin's pals use 'GooseEgg' malware to launch attacks you can defeat with patches or deletion

Russian spies are exploiting a years-old Windows print spooler vulnerability and using a custom tool called GooseEgg to elevate privileges and steal credentials across compromised networks, according to Microsoft Threat Intelligence.…

Yesterday — April 22nd 2024 — Your RSS feeds

FBI and friends get two more years of warrantless FISA Section 702 snooping

Senate kills reform amendments, Biden swiftly signs bill into law

US lawmakers on Saturday reauthorized a contentious warrantless surveillance tool for another two years — and added a whole bunch of people and organizations to the list of those who can be compelled to spy for Uncle Sam.…

Russian FSB Counterintelligence Chief Gets 9 Years in Cybercrime Bribery Scheme

The head of counterintelligence for a division of the Russian Federal Security Service (FSB) was sentenced last week to nine years in a penal colony for accepting a $1.7 million bribe to ignore the activities of a prolific Russian cybercrime group that hacked thousands of e-commerce websites. The protection scheme was exposed in 2022 when Russian authorities arrested six members of the group, which sold millions of stolen payment cards at flashy online shops like Trump’s Dumps.

A now-defunct carding shop that sold stolen credit cards and invoked 45’s likeness and name.

As reported by The Record, a Russian court last week sentenced former FSB officer Grigory Tsaregorodtsev for taking a $1.7 million bribe from a cybercriminal group that was seeking a “roof,” a well-placed, corrupt law enforcement official who could be counted on to both disregard their illegal hacking activities and run interference with authorities in the event of their arrest.

Tsaregorodtsev was head of the counterintelligence department for a division of the FSB based in Perm, Russia. In February 2022, Russian authorities arrested six men in the Perm region accused of selling stolen payment card data. They also seized multiple carding shops run by the gang, including Ferum Shop, Sky-Fraud, and Trump’s Dumps, a popular fraud store that invoked the 45th president’s likeness and promised to “make credit card fraud great again.”

All of the domains seized in that raid were registered by an IT consulting company in Perm called Get-net LLC, which was owned in part by Artem Zaitsev — one of the six men arrested. Zaitsev reportedly was a well-known programmer whose company supplied services and leasing to the local FSB field office.

The message for Trump’s Dumps users left behind by Russian authorities that seized the domain in 2022.

Russian news sites report that Internal Affairs officials with the FSB grew suspicious when Tsaregorodtsev became a little too interested in the case following the hacking group’s arrests. The former FSB agent had reportedly assured the hackers he could have their case transferred and that they would soon be free.

But when that promised freedom didn’t materialize, four of the defendants pulled the walls down on the scheme and brought down their own roof. The FSB arrested Tsaregorodtsev, and seized $154,000 in cash, 100 gold bars, real estate and expensive cars.

At Tsaregorodtsev’s trial, his lawyers argued that their client wasn’t guilty of bribery per se, but that he did admit to fraud because he was ultimately unable to fully perform the services for which he’d been hired.

The Russian news outlet Kommersant reports that all four of those who cooperated were released with probation or correctional labor. Zaitsev received a sentence of 3.5 years in prison, and defendant Alexander Kovalev got four years.

In 2017, KrebsOnSecurity profiled Trump’s Dumps, and found the contact address listed on the site was tied to an email address used to register more than a dozen domains that were made to look like legitimate Javascript calls many e-commerce sites routinely make to process transactions — such as “js-link[dot]su,” “js-stat[dot]su,” and “js-mod[dot]su.”

Searching on those malicious domains revealed a 2016 report from RiskIQ, which shows the domains featured prominently in a series of hacking campaigns against e-commerce websites. According to RiskIQ, the attacks targeted online stores running outdated and unpatched versions of shopping cart software from Magento, Powerfront and OpenCart.

Those shopping cart flaws allowed the crooks to install “web skimmers,” malicious Javascript used to steal credit card details and other information from payment forms on the checkout pages of vulnerable e-commerce sites. The stolen customer payment card details were then sold on sites like Trump’s Dumps and Sky-Fraud.
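The skimmer pattern described above depends on checkout pages quietly loading JavaScript from attacker-controlled lookalike domains. As a rough illustrative sketch (my own, not any vendor's tooling), a site operator could audit a page's external script hosts against an allowlist; the shop domain below is hypothetical, while `js-stat.su` is one of the lookalike domains mentioned above:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class ScriptAuditor(HTMLParser):
    """Collect the hosts of all external <script src=...> tags on a page."""
    def __init__(self):
        super().__init__()
        self.hosts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                host = urlparse(src).netloc
                if host:
                    self.hosts.append(host)

def flag_unexpected_scripts(html: str, allowed_hosts: set) -> list:
    """Return external script hosts that are not on the site's allowlist."""
    auditor = ScriptAuditor()
    auditor.feed(html)
    return [h for h in auditor.hosts if h not in allowed_hosts]

# Hypothetical checkout page: one legitimate script, plus one injected
# skimmer loading from a lookalike domain of the kind described above.
page = """
<html><body>
<script src="https://checkout.example-shop.com/pay.js"></script>
<script src="https://js-stat.su/counter.js"></script>
</body></html>
"""
suspicious = flag_unexpected_scripts(page, {"checkout.example-shop.com"})
```

A real deployment would more likely enforce this in the browser via a Content-Security-Policy header rather than by scanning HTML after the fact.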

The Next US President Will Have Troubling New Surveillance Powers

Over the weekend, President Joe Biden signed legislation not only reauthorizing a major FISA spy program but expanding it in ways that could have major implications for privacy rights in the US.

Europol now latest cops to beg Big Tech to ditch E2EE

Don't bore us, get to the chorus: You need less privacy so we can protect the children

Yet another international cop shop has come out swinging against end-to-end encryption - this time it's Europol which is urging an end to implementation of the tech for fear police investigations will be hampered by protected DMs.…

Tinder's 'Share My Date' feature will let you share date plans with friends and family

The upcoming feature will help you more easily share the location, date, and time of your date and a photo of your online match.

Germany arrests trio accused of trying to smuggle naval military tech to China

Prosecutors believe one frikkin' laser did make its way to Beijing

Germany has arrested three citizens who allegedly tried to transfer military technology to China, a violation of the country's export rules.…

ToddyCat Hacker Group Uses Advanced Tools for Industrial-Scale Data Theft

The threat actor known as ToddyCat has been observed using a wide range of tools to retain access to compromised environments and steal valuable data. Russian cybersecurity firm Kaspersky characterized the adversary as relying on various programs to harvest data on an "industrial scale" from primarily governmental organizations, some of them defense related, located in

How to Spot AI Audio Deepfakes at Election Time

We’ve said it several times in our blogs — it’s tough knowing what’s real and what’s fake out there. And that’s absolutely the case with AI audio deepfakes online. 

Bad actors of all stripes have found out just how easy, inexpensive, and downright uncanny AI audio deepfakes can be. With only a few minutes of original audio, seconds even, they can cook up phony audio that sounds like the genuine article — and wreak all kinds of havoc with it. 

A few high-profile cases in point, each politically motivated in an election year where the world will see more than 60 national elections: 

  • In January, thousands of U.S. voters in New Hampshire received an AI robocall that impersonated President Joe Biden, urging them not to vote in the primary. 
  • In the UK, more than 100 deepfake social media ads impersonated Prime Minister Rishi Sunak on the Meta platform last December.[i] 
  • Similarly, the 2023 parliamentary elections in Slovakia spawned deepfake audio clips that featured false proposals for rigging votes and raising the price of beer.[ii] 

Yet deepfakes have targeted more than election candidates. Other public figures have found themselves attacked as well. One example comes from Baltimore County in Maryland, where a high school principal has allegedly fallen victim to a deepfake attack.  

It involves an offensive audio clip resembling the principal’s voice that was posted on social media, news of which spread rapidly online. The school’s union has since stated that the clip was an AI deepfake, and an investigation is ongoing.[iii] In the wake of the attack, at least one expert in the field of AI deepfakes said that the clip is likely a deepfake, citing “distinct signs of digital splicing; this may be the result of several individual clips being synthesized separately and then combined.”[iv] 

And right there is the issue. It takes expert analysis to clinically detect if an audio clip is an AI deepfake. 

What makes audio deepfakes so hard to spot?  

Audio deepfakes give off far fewer clues, as compared to the relatively easier-to-spot video deepfakes out there. Currently, video deepfakes typically give off several clues, like poorly rendered hands and fingers, off-kilter lighting and reflections, a deadness to the eyes, and poor lip-syncing. Clearly, audio deepfakes don’t suffer any of those issues. That indeed makes them tough to spot. 

The implications of AI audio deepfakes online present themselves rather quickly. In a time where general awareness of AI audio deepfakes lags behind the availability and low cost of deepfake tools, people are more prone to believe an audio clip is real. Until “at home” AI detection tools become available to everyday people, skepticism is called for.  

Just as “seeing isn’t always believing” on the internet, “hearing isn’t always believing” now applies on the internet as well. 

How to spot audio deepfakes. 

The people behind these attacks have an aim in mind. Whether it’s to spread disinformation, ruin a person’s reputation, or conduct some manner of scam, audio deepfakes look to do harm. In fact, that intent to harm is one of the signs of an audio deepfake, among several others. 

Listen to what’s actually being said. In many cases, bad actors create AI audio deepfakes designed to build strife, deepen divisions, or push outrageous lies. It’s an age-old tactic. By playing on people’s emotions, they ensure that people will spread the message in the heat of the moment. Is a political candidate asking you not to vote? Is a well-known public figure “caught” uttering malicious speech? Is Taylor Swift offering you free cookware? While not an outright sign of an AI audio deepfake alone, it’s certainly a sign that you should verify the source before drawing any quick conclusions. And certainly before sharing the clip. 

Think of the person speaking. If you’ve heard them speak before, does this sound like them? Specifically, does their pattern of speech ring true, or does it pause in places it typically doesn’t … or speak more quickly or slowly than usual? AI audio deepfakes might not always capture these nuances. 

Listen to their language. What kind of words are they saying? Are they using vocabulary and turns of phrase they usually don’t? An AI can duplicate a person’s voice, yet it can’t duplicate their style. A bad actor still must write the “script” for the deepfake, and the phrasing they use might not sound like the target. 

Keep an ear out for edits. Some deepfakes stitch audio together. AI audio tools tend to work better with shorter clips, rather than feeding them one long script. Once again, this can introduce pauses that sound off in some way and ultimately affect the way the target of the deepfake sounds. 

Is the person breathing? Another marker of a possible fake is when the speaker doesn’t appear to breathe. AI tools don’t always account for this natural part of speech. It’s subtle, yet when you know to listen for it, you’ll notice it when a person doesn’t pause for breath. 

Living in a world of AI audio deepfakes. 

It’s upon us. Without alarmism, we should all take note that not everything we see, and now hear, on the internet is true. The advent of easy, inexpensive AI tools has made that a simple fact. 

The challenge this presents is that it’s largely up to us as individuals to sniff out a fake. Yet again, it comes down to our personal sense of internet street smarts. That includes a basic understanding of AI deepfake technology, what it’s capable of, and how fraudsters and bad actors put it to use. Plus, a healthy dose of level-headed skepticism, both now in this election year and moving forward. 

[i] https://www.theguardian.com/technology/2024/jan/12/deepfake-video-adverts-sunak-facebook-alarm-ai-risk-election

[ii] https://www.bloomberg.com/news/articles/2023-09-29/trolls-in-slovakian-election-tap-ai-deepfakes-to-spread-disinfo

[iii] https://www.baltimoresun.com/2024/01/17/pikesville-principal-alleged-recording/

[iv] https://www.scientificamerican.com/article/ai-audio-deepfakes-are-quickly-outpacing-detection/

The post How to Spot AI Audio Deepfakes at Election Time appeared first on McAfee Blog.

Watchdog tells Dutch govt: 'Do not use Facebook if there is uncertainty about privacy'

Meta insists it's just misunderstood and it's safe to talk to citizens over FB

The Dutch Data Protection Authority (AP) has warned that government organizations should not use Facebook to communicate with the country's citizens unless they can guarantee the privacy of data.…

US House passes fresh TikTok ban proposal to Senate

Sadly no push to end stupid TikTok dances, but ByteDance would have a year to offload app stateside

Fresh US legislation to force the sale of TikTok locally was passed in Washington over the weekend after an earlier version stalled in the Senate.…

Pentera's 2024 Report Reveals Hundreds of Security Events per Week, Highlighting the Criticality of Continuous Validation

Over the past two years, a shocking 51% of organizations surveyed in a leading industry report have been compromised by a cyberattack. Yes, over half.  And this, in a world where enterprises deploy an average of 53 different security solutions to safeguard their digital domain.  Alarming? Absolutely. A recent survey of CISOs and CIOs, commissioned by Pentera and

UK data watchdog questions how private Google's Privacy Sandbox is

Leaked draft report says stated goals still come up short

Google's Privacy Sandbox, which aspires to provide privacy-preserving ad targeting and analytics, still isn't sufficiently private.…

MITRE Corporation Breached by Nation-State Hackers Exploiting Ivanti Flaws

The MITRE Corporation revealed that it was the target of a nation-state cyber attack that exploited two zero-day flaws in Ivanti Connect Secure appliances starting in January 2024. The intrusion led to the compromise of its Networked Experimentation, Research, and Virtualization Environment (NERVE), an unclassified research and prototyping network. The unknown adversary "performed reconnaissance

Has the ever-present cyber danger just got worse?

Facing down the triple threat of ransomware, data breaches and criminal extortion

Sponsored On the face of it, there really isn't much of an upside for the current UK government after MPs described its response to attacks by cyber-espionage group APT31 as 'feeble, derisory and sadly insufficient.'…

Ransomware Double-Dip: Re-Victimization in Cyber Extortion

Between crossovers - Do threat actors play dirty or desperate? In our dataset of over 11,000 victim organizations that have experienced a Cyber Extortion / Ransomware attack, we noticed that some victims re-occur. Consequently, the question arises why we observe a re-victimization and whether or not this is an actual second attack, an affiliate crossover (meaning an affiliate has gone to

Researchers Uncover Windows Flaws Granting Hackers Rootkit-Like Powers

New research has found that the DOS-to-NT path conversion process could be exploited by threat actors to achieve rootkit-like capabilities to conceal and impersonate files, directories, and processes. "When a user executes a function that has a path argument in Windows, the DOS path at which the file or folder exists is converted to an NT path," SafeBreach security researcher Or Yair said
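The conversion quirk at the heart of this research can be illustrated with a simplified mimic: during DOS-to-NT conversion, Windows strips trailing spaces and periods from each path component, so several distinct DOS paths can collapse onto one NT target. This toy Python sketch only imitates that stripping rule; it is not the actual Windows implementation:

```python
def dos_component_to_nt(name: str) -> str:
    """Simplified mimic: Windows drops trailing spaces and periods
    from each path component during DOS-to-NT path conversion."""
    return name.rstrip(" .")

def dos_path_to_nt(path: str) -> str:
    """Apply the per-component stripping across a backslash-separated path."""
    return "\\".join(dos_component_to_nt(p) for p in path.split("\\"))

# Two different DOS paths collapse to the same NT target --
# the kind of ambiguity that enables file impersonation tricks.
a = dos_path_to_nt("C:\\data\\secret.txt")
b = dos_path_to_nt("C:\\data\\secret.txt. ")
```

Here `a` and `b` name what looks like two different files at the DOS layer but resolve to a single NT path.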

Google all at sea over rising tide of robo-spam

What if it's not AI but the algorithm to blame?

Opinion It was a bold claim by the richest and most famous tech founder: bold, precise and wrong. Laughably so. Twenty years ago, Bill Gates promised to rid the world of spam by 2006. How's that worked out for you?…

Rarest, strangest, form of Windows saved techie from moment of security madness

For once, Redmond's finest saved the day - by being rubbish in unexpectedly useful ways

Who, Me? It's Monday once again, dear reader, and you know what that means: another dive into the Who, Me? confessional, to share stories of IT gone wrong that Reg readers managed to pretend had gone right.…

Microsoft Warns: North Korean Hackers Turn to AI-Fueled Cyber Espionage

Microsoft has revealed that North Korea-linked state-sponsored cyber actors have begun to use artificial intelligence (AI) to make its operations more effective and efficient. "They are learning to use tools powered by AI large language models (LLM) to make their operations more efficient and effective," the tech giant said in its latest report on East Asia hacking groups. The company

North Koreans Secretly Animated Amazon and Max Shows, Researchers Say

Thousands of exposed files on a misconfigured North Korean server hint at one way the reclusive country may evade international sanctions.

Researchers claim Windows Defender can be fooled into deleting databases

Two rounds of reports and patches may not have completely closed this hole

BLACK HAT ASIA Researchers at US/Israeli infosec outfit SafeBreach last Friday discussed flaws in Microsoft and Kaspersky security products that can potentially allow the remote deletion of files. And, they asserted, the hole could remain exploitable – even after both vendors claim to have patched the problem.…

China creates 'Information Support Force' to improve networked defence capabilities

A day after FBI boss warns Beijing is poised to strike against US infrastructure

China last week reorganized its military to create an Information Support Force aimed at ensuring it can fight and win networked wars.…

MITRE admits 'nation state' attackers touched its NERVE R&D operation

PLUS: Akira ransomware resurgent; Telehealth outfit fined for data-sharing; This week's nastiest vulns

Infosec In Brief In a cautionary tale that no one is immune from attack, the security org MITRE has admitted that it got pwned.…

Before yesterday — Your RSS feeds

[webapps] Laravel Framework 11 - Credential Leakage

Laravel Framework 11 - Credential Leakage

[webapps] SofaWiki 3.9.2 - Remote Command Execution (RCE) (Authenticated)

SofaWiki 3.9.2 - Remote Command Execution (RCE) (Authenticated)

[webapps] Flowise 1.6.5 - Authentication Bypass

Flowise 1.6.5 - Authentication Bypass

[webapps] FlatPress v1.3 - Remote Command Execution

FlatPress v1.3 - Remote Command Execution

[webapps] Wordpress Plugin Background Image Cropper v1.2 - Remote Code Execution

Wordpress Plugin Background Image Cropper v1.2 - Remote Code Execution

New RedLine Stealer Variant Disguised as Game Cheats Using Lua Bytecode for Stealth

A new information stealer has been found leveraging Lua bytecode for added stealth and sophistication, findings from McAfee Labs reveal. The cybersecurity firm has assessed it to be a variant of a known malware called RedLine Stealer owing to the fact that the command-and-control (C2) server IP address has been previously identified as associated with the malware. RedLine Stealer,

Protecting yourself after a medical data breach – Week in security with Tony Anscombe

What are the risks and consequences of having your health data exposed and what are the steps to take if it happens to you?

The many faces of impersonation fraud: Spot an imposter before it’s too late

What are some of the most common giveaway signs that the person behind the screen or on the other end of the line isn’t who they claim to be?

The ABCs of how online ads can impact children’s well-being

From promoting questionable content to posing security risks, inappropriate ads present multiple dangers for children. Here’s how to help them stay safe.

Bitcoin scams, hacks and heists – and how to avoid them

Here’s how cybercriminals target cryptocurrencies and how you can keep your bitcoin or other crypto safe

eXotic Visit includes XploitSPY malware – Week in security with Tony Anscombe

Almost 400 people in India and Pakistan have fallen victim to an ongoing Android espionage campaign called eXotic Visit

Beyond fun and games: Exploring privacy risks in children’s apps

Should children’s apps come with ‘warning labels’? Here's how to make sure your children's digital playgrounds are safe places to play and learn.

eXotic Visit campaign: Tracing the footprints of Virtual Invaders

ESET researchers uncovered the eXotic Visit espionage campaign that targets users mainly in India and Pakistan with seemingly innocuous apps

7 reasons why cybercriminals want your personal data

Here's what drives cybercriminals to relentlessly target the personal information of other people – and why you need to guard your data like your life depends on it

The devil is in the fine print – Week in security with Tony Anscombe

Temu's cash giveaway where people were asked to hand over vast amounts of their personal data to the platform puts the spotlight on the data-slurping practices of online services today

How often should you change your passwords?

And is that actually the right question to ask? Here’s what else you should consider when it comes to keeping your accounts safe.

Malware hiding in pictures? More likely than you think

There is more to some images than meets the eye – their seemingly innocent façade can mask a sinister threat.

AI-Controlled Fighter Jets Are Dogfighting With Human Pilots Now

Plus: New York’s legislature suffers a cyberattack, police disrupt a global phishing operation, and Apple removes encrypted messaging apps in China.

Palo Alto Networks Discloses More Details on Critical PAN-OS Flaw Under Attack

Palo Alto Networks has shared more details of a critical security flaw impacting PAN-OS that has come under active exploitation in the wild by malicious actors. The company described the vulnerability, tracked as CVE-2024-3400 (CVSS score: 10.0), as "intricate" and a combination of two bugs in versions PAN-OS 10.2, PAN-OS 11.0, and PAN-OS 11.1 of the software. "In

Critical Update: CrushFTP Zero-Day Flaw Exploited in Targeted Attacks

Users of the CrushFTP enterprise file transfer software are being urged to update to the latest version following the discovery of a security flaw that has come under targeted exploitation in the wild. "CrushFTP v11 versions below 11.1 have a vulnerability where users can escape their VFS and download system files," CrushFTP said in an advisory released Friday.
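The advisory describes users escaping CrushFTP's virtual file system; the general class of bug is a requested path that normalizes to somewhere outside its sandbox root. A generic containment check (my own sketch under that assumption, not CrushFTP's actual code) looks like this, using POSIX path semantics for clarity:

```python
import posixpath

def is_contained(root: str, requested: str) -> bool:
    """Return True only if 'requested', resolved relative to 'root',
    still normalizes to a location inside 'root' (guards against ../)."""
    root = posixpath.normpath(root)
    target = posixpath.normpath(posixpath.join(root, requested))
    return posixpath.commonpath([root, target]) == root

# A hypothetical per-user sandbox root.
ok = is_contained("/srv/ftp/user1", "docs/report.pdf")        # stays inside
escape = is_contained("/srv/ftp/user1", "../../etc/passwd")   # traversal attempt
```

The key point is that the check runs on the *normalized* target, so `..` segments cannot smuggle the path outside the root.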

Sacramento airport goes no-fly after AT&T internet cable snipped

Police say this appears to be a 'deliberate act.'

Sacramento International Airport (SMF) suffered hours of flight delays yesterday after what appears to be an intentional cutting of an AT&T internet cable serving the facility.…

How To Teach Your Kids About Deepfakes

Is it real? Is it fake? 

Deepfake technology has certainly made everything far more complicated online. How do you know for sure what’s real? Can you actually trust anything anymore? Recently, a Hong Kong company lost A$40 million in a deepfake scam after an employee transferred money following a video call with a scammer who looked like his boss! Even Oprah and Taylor have been affected by deepfake scammers using them to promote dodgy online schemes. So, how do we get our heads around it, and just as importantly, how do we help our kids understand it? Don’t stress – I got you. Here’s what you need to know. 

What Actually Is Deepfake Technology? 

Deepfake technology is essentially photoshopping on steroids. It’s when artificial intelligence is used to create videos, voice imitations, and images of people doing and saying things they never actually did. The ‘deep’ comes from the type of artificial intelligence that is used – deep learning. Deep learning trains computers to process data and make predictions in the same way the human brain does. 

When it first emerged around 2017, it was clunky and many of us could easily spot a deepfake; however, it is becoming increasingly sophisticated and convincing. And that’s the problem. It can be used to create great harm and disruption. Not only can it be used by scammers and dodgy operators to have celebrities promote their products, but it can also be used to undertake image abuse, create pornographic material, and manipulate the outcome of elections. 

How Are DeepFakes Made? 

When deepfakes first emerged, they were clunky because they used a type of AI model called a Generative Adversarial Network (or GAN). This is when specific parts of video footage or pictures are manipulated, quite commonly the mouth. You may remember when Australian mining magnate Andrew Forrest was ‘deepfaked’ into spruiking for a bogus ‘get rich quick’ scheme. This deepfake used GAN, as it manipulated just his mouth. 

But deepfakes are now even more convincing thanks to the use of a new type of generative AI called a diffusion model. This new technology means a deepfake can be created from scratch without having to manipulate original content, making the deepfake even more realistic. 

Experts and skilled scammers were the only ones who really had access to this technology until 2023 when it became widely available. Now, anyone who has a computer or phone and the right app (widely available) can make a deepfake.  

While it might take a novice scammer just a few minutes to create a deepfake, skilled hackers are able to produce very realistic deepfakes in just a few hours. 

Why Are Deepfakes Made? 

As I mentioned before, deepfakes are generated to either create harm or cause disruption. But a flurry of recent research is showing that creating deepfake pornographic videos is where most scammers are putting their energy. A recent study into deepfakes in 2023 found that deepfake pornography makes up a whopping 98% of all deepfake videos found online. And not surprisingly, 99% of the victims are women. The report also found that it now takes less than 25 minutes and costs nothing to create a 60-second deepfake pornographic video of anyone using just one clear face image! Wow!! 

Apart from pornography, they are often used for election tampering, identity theft, scam attempts and to spread fake news. In summary, nothing is off limits!  

How To Spot A Deepfake 

The ability to spot a deepfake is something we all need, given the potential harm they can cause. Here’s what to look out for: 

  • If it’s a video, check that the audio matches the video, i.e., is the audio synced to the lip movements? Check for unnatural blinking, odd lighting, misplaced shadows, or facial expressions that don’t match the tone of the voice. These might be the ‘older’ style of deepfakes, created using the GAN or ‘face-swap’ model. 
  • Deepfake videos and pictures created with the ‘face swap’ model may also look ‘off’ around the area where the face has been blended onto the original forehead. Check for colour and texture differences, or perhaps an unusual hairline. 
  • The newer diffusion model means deepfakes can be harder to spot; however, look for asymmetries like unmatching earrings or eyes that are different sizes. These models also don’t do hands very well, so check for the right number of fingers and ‘weird’ looking hands. 
  • A gut feeling! Even though the technology is becoming very sophisticated, it’s often possible to detect when something doesn’t seem quite right. There could be an awkwardness in body movement, a facial feature that isn’t quite right, an unusual background noise, or even weird-looking teeth!! 

How To Protect Yourself 

There are two main ways you could be affected by deepfakes. First, as a victim, e.g. being ‘cast’ in a deepfake pornographic video or photo. Second, by being influenced by a deepfake video that is designed to create harm, e.g. a scam, fake news, or even political disinformation. 

But the good news is that protecting yourself from deepfake technology is not dissimilar to protecting yourself from general online threats. Here are my top tips: 

Be Careful What You Share 

The best way to protect yourself from becoming a victim is to avoid sharing anything online at all. I appreciate that this perhaps isn’t totally realistic so instead, be mindful of what and where you share. Always have privacy settings set to the highest level and consider sharing your pics and videos with a select group instead of with all your online followers. Not only does this reduce the chances of your pictures making their way into the hands of deepfake scammers but it also increases the chance of finding the attacker if someone does in fact create a deepfake of you. 

Consider Watermarking Photos 

If you feel like you need to share pics and videos online, perhaps add a digital watermark to them. This will make it much harder for deepfake creators to use your images as it is a more complicated procedure that could possibly be traceable. 

Be Cautiously Suspicious Always 

Teach your kids to never assume that everything they see online is true or real. If you always operate with a sceptical mindset, then there is less of a chance that you will be caught up in a deepfake scam. If you find a video or photo that you aren’t sure about, do a reverse image search. Or check to see if it’s covered by trusted news websites, if it’s a news video. Remember, if what the person in the video is saying or doing is important, the mainstream news media will cover it. You can always fact check what the ‘person’ in the video is claiming as well. 

Use Multi-Factor Authentication 

Adding another layer of security to all your online accounts will make it that much harder for a deepfake creator to access your accounts and use your photos and videos. Multi-factor authentication or 2-factor authentication means you simply add an extra step to your login process. It could be a facial scan, a code sent to your smartphone, or even a code generated on an authenticator app like Google Authenticator. This is a complete no-brainer and probably adds no more than 30 seconds to the logging in process. 
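Those one-time codes from apps like Google Authenticator are typically generated with the TOTP algorithm (RFC 6238): an HMAC over the current 30-second time-step counter, truncated down to a few digits. A minimal standard-library sketch, illustrative rather than a vetted implementation:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter,
    dynamically truncated to a short numeric code."""
    counter = struct.pack(">Q", unix_time // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: with the ASCII secret "12345678901234567890",
# the 8-digit SHA-1 code at t=59 seconds is "94287082".
code = totp(b"12345678901234567890", 59, digits=8)
```

Because the code is derived from the current time step, it expires within seconds, which is what makes a stolen code far less useful to an attacker than a stolen password.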

Keep Your Software Updated 

Yes, this can make a huge difference. Software updates commonly include ‘patches’ or fixes for security vulnerabilities. So, if your software is out of date, it’s a little like having a broken window and then wondering why people can still get in! I recommend turning on automatic updates, so you don’t have to think about it. 

Passwords Are Key 

A weak password is also like having a broken window – it’s so much easier for deepfake scammers to access your accounts and your pics and videos. I know it seems like a lot of work but if every one of your online accounts has its own complex and individual password then you have a much greater chance of keeping the deepfake scammers away! 
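Since every account needs its own complex password, letting software generate them beats inventing them yourself, which is exactly what a password manager does. A small sketch of the idea using Python's `secrets` module:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing at least one lowercase
    letter, uppercase letter, digit, and punctuation character."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Retry until all four character classes are represented.
        if (any(c.islower() for c in pw)
                and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

# Each account gets its own freshly generated password.
password = generate_password(20)
```

The `secrets` module uses the operating system's cryptographic random source, unlike the `random` module, which is predictable and unsuitable for passwords.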

So, be vigilant, always think critically, and remember you can report deepfake content to your law enforcement agency. In the US, that’s the FBI and in Australia, it is the eSafety Commissioner’s Office.

Stay safe all!

Alex 

The post How To Teach Your Kids About Deepfakes appeared first on McAfee Blog.

The Biggest Deepfake Porn Website Is Now Blocked in the UK

The world's most-visited deepfake website and another large competing site are stopping people in the UK from accessing them, days after the UK government announced a crackdown.