The head of counterintelligence for a division of the Russian Federal Security Service (FSB) was sentenced last week to nine years in a penal colony for accepting a $1.7 million bribe to ignore the activities of a prolific Russian cybercrime group that hacked thousands of e-commerce websites. The protection scheme was exposed in 2022 when Russian authorities arrested six members of the group, which sold millions of stolen payment cards at flashy online shops like Trump’s Dumps.
A now-defunct carding shop that sold stolen credit cards and invoked 45’s likeness and name.
As reported by The Record, a Russian court last week sentenced former FSB officer Grigory Tsaregorodtsev for taking a $1.7 million bribe from a cybercriminal group that was seeking a “roof,” a well-placed, corrupt law enforcement official who could be counted on to both disregard their illegal hacking activities and run interference with authorities in the event of their arrest.
Tsaregorodtsev was head of the counterintelligence department for a division of the FSB based in Perm, Russia. In February 2022, Russian authorities arrested six men in the Perm region accused of selling stolen payment card data. They also seized multiple carding shops run by the gang, including Ferum Shop, Sky-Fraud, and Trump’s Dumps, a popular fraud store that invoked the 45th president’s likeness and promised to “make credit card fraud great again.”
All of the domains seized in that raid were registered by an IT consulting company in Perm called Get-net LLC, which was owned in part by Artem Zaitsev — one of the six men arrested. Zaitsev reportedly was a well-known programmer whose company supplied services and leasing to the local FSB field office.
The message for Trump’s Dumps users left behind by Russian authorities that seized the domain in 2022.
Russian news sites report that Internal Affairs officials with the FSB grew suspicious when Tsaregorodtsev became a little too interested in the case following the hacking group’s arrests. The former FSB agent had reportedly assured the hackers he could have their case transferred and that they would soon be free.
But when that promised freedom didn’t materialize, four of the defendants pulled the walls down on the scheme and brought down their own roof. The FSB arrested Tsaregorodtsev, and seized $154,000 in cash, 100 gold bars, real estate and expensive cars.
At Tsaregorodtsev’s trial, his lawyers argued that their client wasn’t guilty of bribery per se, but that he did admit to fraud because he was ultimately unable to fully perform the services for which he’d been hired.
The Russian news outlet Kommersant reports that all four of those who cooperated were released with probation or correctional labor. Zaitsev received a sentence of 3.5 years in prison, and defendant Alexander Kovalev got four years.
In 2017, KrebsOnSecurity profiled Trump’s Dumps, and found the contact address listed on the site was tied to an email address used to register more than a dozen domains that were made to look like legitimate Javascript calls many e-commerce sites routinely make to process transactions — such as “js-link[dot]su,” “js-stat[dot]su,” and “js-mod[dot]su.”
Searching on those malicious domains revealed a 2016 report from RiskIQ, which shows the domains featured prominently in a series of hacking campaigns against e-commerce websites. According to RiskIQ, the attacks targeted online stores running outdated and unpatched versions of shopping cart software from Magento, Powerfront and OpenCart.
Those shopping cart flaws allowed the crooks to install “web skimmers,” malicious Javascript used to steal credit card details and other information from payment forms on the checkout pages of vulnerable e-commerce sites. The stolen customer payment card details were then sold on sites like Trump’s Dumps and Sky-Fraud.
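To illustrate how a defender might hunt for injections like these, here is a minimal sketch that scans a page’s external script tags against a blocklist. The blocklisted domains are the ones named in the RiskIQ report (defanged in the article text); the sample page and function names are illustrative, not from any real tool.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Skimmer domains named in the 2016 RiskIQ report (written defanged in the article)
SKIMMER_DOMAINS = {"js-link.su", "js-stat.su", "js-mod.su"}

class ScriptSrcScanner(HTMLParser):
    """Collect external <script src=...> URLs whose host is on the blocklist."""

    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag != "script":
            return
        src = dict(attrs).get("src") or ""
        host = urlparse(src).hostname or ""
        if host in SKIMMER_DOMAINS:
            self.suspicious.append(src)

def find_skimmer_scripts(html: str) -> list[str]:
    scanner = ScriptSrcScanner()
    scanner.feed(html)
    return scanner.suspicious

page = '<html><body><script src="https://js-stat.su/track.js"></script></body></html>'
print(find_skimmer_scripts(page))  # ['https://js-stat.su/track.js']
```

A real deployment would compare script sources against continuously updated threat intelligence rather than a static set, but the core check, "does my checkout page load Javascript from a domain I don’t recognize?", is the same.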
We’ve said it several times in our blogs — it’s tough knowing what’s real and what’s fake out there. And that’s absolutely the case with AI audio deepfakes online.
Bad actors of all stripes have found out just how easy, inexpensive, and downright uncanny AI audio deepfakes can be. With only a few minutes of original audio, seconds even, they can cook up phony audio that sounds like the genuine article — and wreak all kinds of havoc with it.
A few high-profile cases in point, each politically motivated in an election year where the world will see more than 60 national elections:
Yet deepfakes have targeted more than election candidates. Other public figures have found themselves attacked as well. One example comes from Baltimore County in Maryland, where a high school principal has allegedly fallen victim to a deepfake attack.
It involves an offensive audio clip resembling the principal’s voice that was posted on social media, news of which spread rapidly online. The school’s union has since stated that the clip was an AI deepfake, and an investigation is ongoing.iii In the wake of the attack, at least one expert in the field of AI deepfakes said that the clip is likely a deepfake, citing “distinct signs of digital splicing; this may be the result of several individual clips being synthesized separately and then combined.”iv
And right there is the issue. It takes expert analysis to clinically detect if an audio clip is an AI deepfake.
Audio deepfakes give off far fewer clues, as compared to the relatively easier-to-spot video deepfakes out there. Currently, video deepfakes typically give off several clues, like poorly rendered hands and fingers, off-kilter lighting and reflections, a deadness to the eyes, and poor lip-syncing. Clearly, audio deepfakes don’t suffer any of those issues. That indeed makes them tough to spot.
The implications of AI audio deepfakes online present themselves rather quickly. In a time where general awareness of AI audio deepfakes lags behind the availability and low cost of deepfake tools, people are more prone to believe an audio clip is real. Until “at home” AI detection tools become available to everyday people, skepticism is called for.
Just as “seeing isn’t always believing” on the internet, we can say “hearing isn’t always believing” on the internet as well.
The people behind these attacks have an aim in mind. Whether it’s to spread disinformation, ruin a person’s reputation, or conduct some manner of scam, audio deepfakes look to do harm. In fact, that intent to harm is one of the signs of an audio deepfake, among several others.
Listen to what’s actually being said. In many cases, bad actors create AI audio deepfakes designed to build strife, deepen divisions, or push outrageous lies. It’s an age-old tactic. By playing on people’s emotions, they ensure that people will spread the message in the heat of the moment. Is a political candidate asking you not to vote? Is a well-known public figure “caught” uttering malicious speech? Is Taylor Swift offering you free cookware? While not an outright sign of an AI audio deepfake alone, it’s certainly a sign that you should verify the source before drawing any quick conclusions. And certainly before sharing the clip.
Think of the person speaking. If you’ve heard them speak before, does this sound like them? Specifically, does their pattern of speech ring true, or does it pause in places it typically doesn’t … or speak more quickly or slowly than usual? AI audio deepfakes might not always capture these nuances.
Listen to their language. What kind of words are they saying? Are they using vocabulary and turns of phrase they usually don’t? An AI can duplicate a person’s voice, yet it can’t duplicate their style. A bad actor still must write the “script” for the deepfake, and the phrasing they use might not sound like the target.
Keep an ear out for edits. Some deepfakes stitch audio together. AI audio tools tend to work better with shorter clips, rather than feeding them one long script. Once again, this can introduce pauses that sound off in some way and ultimately affect the way the target of the deepfake sounds.
Is the person breathing? Another marker of a possible fake is when the speaker doesn’t appear to breathe. AI tools don’t always account for this natural part of speech. It’s subtle, yet when you know to listen for it, you’ll notice it when a person doesn’t pause for breath.
It’s upon us. Without alarmism, we should all take note that not everything we see, and now hear, on the internet is true. The advent of easy, inexpensive AI tools has made that a simple fact.
The challenge that presents us is this — it’s largely up to us as individuals to sniff out a fake. Yet again, it comes down to our personal sense of internet street smarts. That includes a basic understanding of AI deepfake technology, what it’s capable of, and how fraudsters and bad actors put it to use. Plus, a healthy dose of level-headed skepticism. Both now in this election year and moving forward.
[iii] https://www.baltimoresun.com/2024/01/17/pikesville-principal-alleged-recording/
[iv] https://www.scientificamerican.com/article/ai-audio-deepfakes-are-quickly-outpacing-detection/
The post How to Spot AI Audio Deepfakes at Election Time appeared first on McAfee Blog.
Imagine receiving a call from a loved one, only to discover it’s not them but a convincing replica created by voice cloning technology. This scenario might sound like something out of a sci-fi movie, but it became a chilling reality for a Brooklyn couple featured in a New Yorker article who thought their loved ones were being held for ransom. The perpetrators used voice cloning to extort money from the couple as they feared for the lives of the husband’s parents.
Their experience is a stark reminder of the growing threat of voice cloning attacks and the importance of safeguarding our voices in the digital age. Voice cloning, also known as voice synthesis or voice mimicry, is a technology that allows individuals to replicate someone else’s voice with remarkable accuracy. While initially developed for benign purposes such as voice assistants and entertainment, it has also become a tool for malicious actors seeking to exploit unsuspecting victims.
As AI tools become more accessible and affordable, the prevalence of deepfake attacks, including voice cloning, is increasing. So, how can you safeguard yourself and your loved ones against voice cloning attacks? Here are some practical steps to take:
Voice cloning attacks represent a new frontier in cybercrime. With vigilance and preparedness, it’s possible to mitigate the risks and protect yourself and your loved ones. By staying informed, establishing safeguards, and remaining skeptical of unexpected communications, you can thwart would-be attackers and keep your voice secure in an increasingly digitized world.
The post How to Protect Yourself Against AI Voice Cloning Attacks appeared first on McAfee Blog.
Cisco XDR is a leader in providing comprehensive threat detection and response across the entire attack surface. We’ll be showcasing new capabilities that will give security teams even more insight, a… Read more on Cisco Blogs
In an ever-evolving digital landscape, cybersecurity has become the cornerstone of organizational success. With the proliferation of sophisticated cyber threats, businesses must adopt a multi-layered… Read more on Cisco Blogs
The U.S. government is warning that “smart locks” securing entry to an estimated 50,000 dwellings nationwide contain hard-coded credentials that can be used to remotely open any of the locks. The lock’s maker Chirp Systems remains unresponsive, even though it was first notified about the critical weakness in March 2021. Meanwhile, Chirp’s parent company, RealPage, Inc., is being sued by multiple U.S. states for allegedly colluding with landlords to illegally raise rents.
On March 7, 2024, the U.S. Cybersecurity & Infrastructure Security Agency (CISA) warned about a remotely exploitable vulnerability with “low attack complexity” in Chirp Systems smart locks.
“Chirp Access improperly stores credentials within its source code, potentially exposing sensitive information to unauthorized access,” CISA’s alert warned, assigning the bug a CVSS (badness) rating of 9.1 (out of a possible 10). “Chirp Systems has not responded to requests to work with CISA to mitigate this vulnerability.”
Matt Brown, the researcher CISA credits with reporting the flaw, is a senior systems development engineer at Amazon Web Services. Brown said he discovered the weakness and reported it to Chirp in March 2021, after the company that manages his apartment building started using Chirp smart locks and told everyone to install Chirp’s app to get in and out of their apartments.
“I use Android, which has a pretty simple workflow for downloading and decompiling the APK apps,” Brown told KrebsOnSecurity. “Given that I am pretty picky about what I trust on my devices, I downloaded Chirp and after decompiling, found that they were storing passwords and private key strings in a file.”
Using those hard-coded credentials, Brown found an attacker could connect to an application programming interface (API) that Chirp uses, which is managed by smart lock vendor August.com, and use it to enumerate and remotely lock or unlock any door in any building that uses the technology.
Update, April 18, 11:55 a.m. ET: August has provided a statement saying it does not believe August or Yale locks are vulnerable to the hack described by Brown.
“We were recently made aware of a vulnerability disclosure regarding access control systems provided by Chirp, using August and Yale locks in multifamily housing,” the company said. “Upon learning of these reports, we immediately and thoroughly investigated these claims. Our investigation found no evidence that would substantiate the vulnerability claims in either our product or Chirp’s as it relates to our systems.”
Update, April 25, 2:45 p.m. ET: Based on feedback from Chirp, CISA has downgraded the severity of this flaw and revised their security advisory to say that the hard-coded credentials do not appear to expose the devices to remote locking or unlocking. CISA says the hardcoded credentials could be used by an attacker within the range of Bluetooth (~30 meters) “to change the configuration settings within the Bluetooth beacon, effectively removing Bluetooth visibility from the device. This does not affect the device’s ability to lock or unlock access points, and access points can still be operated remotely by unauthorized users via other means.”
Brown said when he complained to his leasing office, they sold him a small $50 key fob that uses Near-Field Communications (NFC) to toggle the lock when he brings the fob close to his front door. But he said the fob doesn’t eliminate the ability for anyone to remotely unlock his front door using the exposed credentials and the Chirp mobile app.
Also, the fobs pass the credentials to his front door over the air in plain text, meaning someone could clone the fob just by bumping against him with a smartphone app made to read and write NFC tags.
Neither August nor Chirp Systems responded to requests for comment. It’s unclear exactly how many apartments and other residences are using the vulnerable Chirp locks, but multiple articles about the company from 2020 state that approximately 50,000 units use Chirp smart locks with August’s API.
Roughly a year before Brown reported the flaw to Chirp Systems, the company was bought by RealPage, a firm founded in 1998 as a developer of multifamily property management and data analytics software. In 2021, RealPage was acquired by the private equity giant Thoma Bravo.
Brown said the exposure he found in Chirp’s products is “an obvious flaw that is super easy to fix.”
“It’s just a matter of them being motivated to do it,” he said. “But they’re part of a private equity company now, so they’re not answerable to anybody. It’s too bad, because it’s not like residents of [the affected] properties have another choice. It’s either agree to use the app or move.”
In October 2022, an investigation by ProPublica examined RealPage’s dominance in the rent-setting software market, and found that the company “uses a mysterious algorithm to help landlords push the highest possible rents on tenants.”
“For tenants, the system upends the practice of negotiating with apartment building staff,” ProPublica found. “RealPage discourages bargaining with renters and has even recommended that landlords in some cases accept a lower occupancy rate in order to raise rents and make more money. One of the algorithm’s developers told ProPublica that leasing agents had ‘too much empathy’ compared to computer generated pricing.”
Last year, the U.S. Department of Justice threw its weight behind a massive lawsuit filed by dozens of tenants who are accusing the $9 billion apartment software company of helping landlords collude to inflate rents.
In February 2024, attorneys general for Arizona and the District of Columbia sued RealPage, alleging RealPage’s software helped create a rental monopoly.
‘Ensure your privacy settings are set to the highest level’ – if you’ve been reading my posts for a bit then you’ll know this is one of my top online safety tips. I’m a fan of ensuring that what you (and your kids) share online is limited to only the eyes that you trust. But let’s talk honestly. When was the last time you checked that your privacy settings were nice and tight? And what about your kids? While we all like to think they take our advice, do you think they have? Or is it all a bit complicated?
Research from McAfee confirms that the majority of us are keen to share our content online but with a tighter circle. In fact, 58% of social media users are keen to share content with only their family, friends, and followers but there’s a problem. Nearly half (46%) do not adjust their privacy settings on their social media platforms which means they’re likely sharing content with the entire internet!
And it’s probably no surprise why this is the case. When was the last time you tried to check your privacy settings? Could you even find them? Well, you are not alone with 55% of survey respondents confessing that they struggled to find the privacy settings on their social media platforms or even understand how they work.
Well, the good news is there is now a much easier way to decide exactly who you want to share with online. Introducing McAfee’s Social Privacy Manager. All you need to do is select your privacy preferences in a few quick clicks and McAfee will then adjust the privacy settings on your chosen social media accounts. Currently, McAfee’s software works with more than 100 platforms including LinkedIn, Google, Instagram, YouTube, and TikTok. It works across Android and iOS devices and on Windows and Mac computers also. The software is part of the McAfee+ suite.
Well, once you’ve got your social media privacy under control – you can relax – but just for a bit. Because there are a few other critical steps you need to take to ensure your online privacy is as protected as possible. Here’s what I recommend:
1. A Clever Password Strategy
In my opinion, passwords are one of the most powerful ways of protecting yourself online. If you have a weak and easily guessed password, you may as well not even bother. In an ideal world, every online account needs its own unique, complex password – think at least 12 characters, a combination of numbers, symbols, and both lower and upper case letters. I love using a crazy sentence. Better still, why not use a password manager that will create a password for you that no human could – and it will remember them for you too! A complete no-brainer!
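As a sketch of what a password manager’s generator does under the hood, here is a minimal example following the advice above: at least 12 characters, with lower case, upper case, digits, and symbols. The function name and length default are my own illustrative choices.

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password containing at least one lower-case letter,
    one upper-case letter, one digit, and one symbol."""
    if length < 12:
        raise ValueError("use at least 12 characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())
```

Note the use of the `secrets` module rather than `random`: the former draws from the operating system’s cryptographically secure source, which is what you want for anything security-related.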
2. Is Your Software Up To Date?
Software that is out of date is a little like leaving your windows and doors open and wondering why you might have an intruder. It exposes you to vulnerabilities and weaknesses that scammers can easily exploit. I always recommend setting your software to update automatically so take a little time to ensure yours is configured like this.
3. Think Critically Always
I encourage all my family members – both young and old – to always operate with a healthy dose of suspicion when going about their online business. Being mindful that not everything you see online is true is a powerful mindset. Whether it’s a sensational news article, a compelling ‘must have’ shopping deal, or a ‘TikTok’ influencer providing ‘tried and tested’ financial advice – it’s important to take a minute to think before acting. Always fact-check questionable news stories – you can use sites like Snopes. Why not ‘google’ to see if other customers have bad experiences with the shopping site that’s catching your eye? And if that TikTok influencer is really compelling, do some background research. But, if you have any doubts at all – walk away!
4. Wi-Fi – Think Before You Connect
Let’s be honest, Wi-Fi can be a godsend when you are travelling. If you don’t have mobile coverage and you need to check in on the kids then a Wi-Fi call is gold. But using public Wi-Fi can also be a risky business. So, use it sparingly and never ever conduct any financial transactions while connected to it – no exceptions! If you are a regular traveller, you might want to consider using a VPN to help you connect securely. A VPN will ensure that anything you send using Wi-Fi will be protected and unavailable to any potential prying eyes!
Keeping you and your family safe online is no easy feat. It’s time-consuming and, let’s be honest, sometimes quite overwhelming. If you have 3 kids and a partner and decided to manually update (or supervise them updating) their privacy settings then I reckon you’d be looking at at least half a day’s work – plus all the associated negotiation! So, not only will McAfee’s Social Privacy Manager ensure you and your loved ones have their social media privacy settings set nice and tight, it will also save you hours of work. And that my friends, is a good thing!
The post How Do You Manage Your Social Media Privacy? appeared first on McAfee Blog.
The ability to generate NetFlow from devices that do not natively produce it along with significant storage efficiency and improved workflows make for a significant update to CTB.
Cisco Telemetry… Read more on Cisco Blogs