Threat News Ledger

The server indicates that the URL has been redirected. Try using the Curl download option on the Syndicate Press Admin Panel Cache tab. After updating the settings, be sure to clear the input and output caches, then reload this page.
The server indicates that the URL has been redirected. Try using the Curl download option on the Syndicate Press Admin Panel Cache tab. After updating the settings, be sure to clear the input and output caches, then reload this page.
The server indicates that the URL has been redirected. Try using the Curl download option on the Syndicate Press Admin Panel Cache tab. After updating the settings, be sure to clear the input and output caches, then reload this page.
The server indicates that the URL has been redirected. Try using the Curl download option on the Syndicate Press Admin Panel Cache tab. After updating the settings, be sure to clear the input and output caches, then reload this page.
The server indicates that the URL has been redirected. Try using the Curl download option on the Syndicate Press Admin Panel Cache tab. After updating the settings, be sure to clear the input and output caches, then reload this page.
The server indicates that the URL has been redirected. Try using the Curl download option on the Syndicate Press Admin Panel Cache tab. After updating the settings, be sure to clear the input and output caches, then reload this page.
The server indicates that the URL has been redirected. Try using the Curl download option on the Syndicate Press Admin Panel Cache tab. After updating the settings, be sure to clear the input and output caches, then reload this page.
The server indicates that the URL has been redirected. Try using the Curl download option on the Syndicate Press Admin Panel Cache tab. After updating the settings, be sure to clear the input and output caches, then reload this page.
The server indicates that the URL has been redirected. Try using the Curl download option on the Syndicate Press Admin Panel Cache tab. After updating the settings, be sure to clear the input and output caches, then reload this page.

The following is the most recent public Cyber Threat news posted on Website

Failed to get content from 'http://feeds.feedburner.com/darknethackers'
Sorry, the http://krebsonsecurity.com/feed/ feed is not available at this time.
Sorry, the http://feeds.feedburner.com/NakedSecurity feed is not available at this time.
Sorry, the http://securelist.com/feed/ feed is not available at this time.
Failed to get content from 'http://Blog.malwarebytes.org/feed/'
Failed to get content from 'http://www.tripwire.com/state-of-security/feed/'
Sorry, the http://threatpost.com/feed feed is not available at this time.
Sorry, the http://www.tripwire.com/company/news/rss/all-feed feed is not available at this time.

Security Affairs

Read, think, share … Security is everyone's responsibility

Last feed update: Saturday March 9th, 2019 01:06:48 AM

FBI informed software giant Citrix of a security breach

Friday March 8th, 2019 10:52:39 PM Pierluigi Paganini
The American multinational software company Citrix disclosed a security breach, according to the firm an international cyber criminals gang gained access to its internal network. The American multinational software company Citrix is the last victim of a security breach, according to the company an international cyber criminal gang gained access to its internal network, Hackers […] The post FBI informed software giant Citrix of a security breach appeared first on Security Affairs.

Evading AV with JavaScript Obfuscation

Friday March 8th, 2019 12:41:33 PM Pierluigi Paganini
A few days ago, Cybaze-Yoroi ZLAB researchers spotted a suspicious JavaScript file that implemented several techniques to evade detection of all AV solutions. Introduction A few days ago, Cybaze-Yoroi ZLAB researchers spotted a suspicious JavaScript file needing further attention: it leveraged several techniques in order to evade all AV detection and no one of the […] The post Evading AV with JavaScript Obfuscation appeared first on Security Affairs.

Google discloses Windows zero-day actively exploited in targeted attacks

Friday March 8th, 2019 11:11:42 AM Pierluigi Paganini
Google this week revealed a Windows zero-day that is being actively exploited in targeted attacks alongside a recently fixed Chrome flaw. Google this week disclosed a Windows zero-day vulnerability that is being actively exploited in targeted attacks alongside a recently addressed flaw in Chrome flaw (CVE-2019-5786). The Windows zero-day vulnerability is a local privilege escalation […] The post Google discloses Windows zero-day actively exploited in targeted attacks appeared first on Security Affairs.

Zerodium $500,000 for VMware ESXi, Microsoft Hyper-V Exploits

Friday March 8th, 2019 09:09:02 AM Pierluigi Paganini
Zero-day broker firm Zerodium is offering up to $500,000 for VMware ESXi (vSphere) and Microsoft Hyper-V vulnerabilities. Exploit acquisition firm Zerodium is offering up to $500,000 for VMware ESXi and Microsoft Hyper-V vulnerabilities. The company is looking for exploits that allow guest-to-host escapes in default configurations to gain full access to the host. The overall […] The post Zerodium $500,000 for VMware ESXi, Microsoft Hyper-V Exploits appeared first on Security Affairs.

Research confirms rampant sale of SSL/TLS certificates on darkweb

Friday March 8th, 2019 07:35:15 AM Pierluigi Paganini
A study conducted by academics discovered that SSL and TLS certificates and associated services can be easily acquired from dark web marketplaces. A study sponsored by Venafi and conducted by researchers from Georgia State University in the U.S. and the University of Surrey in the U.K. discovered that SSL and TLS certificates and associated services […] The post Research confirms rampant sale of SSL/TLS certificates on darkweb appeared first on Security Affairs.

Cisco security updates fix dozens of flaws in Nexus Switches

Thursday March 7th, 2019 08:39:58 PM Pierluigi Paganini
Cisco released security updates to address over two dozen serious vulnerabilities affecting the Cisco Nexus switches. Cisco released security updates to address over two dozen serious vulnerabilities affecting the Cisco Nexus switches, including denial-of-service (DoS) issues, arbitrary code execution and privilege escalation flaws. Cisco published security advisories for most of the vulnerabilities, many of them impact the […] The post Cisco security updates fix dozens of flaws in Nexus Switches appeared first on Security Affairs.

StealthWorker Malware Uses Windows, Linux Bots to Hack Websites

Thursday March 7th, 2019 03:26:39 PM Pierluigi Paganini
Security experts at FortiGuard uncovered a new malware campaign aimed at delivering the StealthWorker brute-force malware. The malicious code targets both Windows and Linux systems, compromised systems are used to carry out brute force attacks along with other infected systems. The malicious code was first discovered by Malwarebytes at the end of February and tracked […] The post StealthWorker Malware Uses Windows, Linux Bots to Hack Websites appeared first on Security Affairs.

Microsoft warns of economic damages caused by Iran-linked hackers

Thursday March 7th, 2019 11:55:54 AM Pierluigi Paganini
Researchers at Microsoft warn of damages caused by cyber operations conducted by Iran-linked cyberespionage groups. Security experts at Microsoft are warning of economic damages caused by the activity of Iran-linked hacking groups that are working to penetrate systems, businesses, and governments worldwide. According to Microsoft, the attackers already caused hundreds of millions of dollars in […] The post Microsoft warns of economic damages caused by Iran-linked hackers appeared first on Security Affairs.

Too much UPnP-enabled connected devices still vulnerable to cyber attacks

Thursday March 7th, 2019 09:56:14 AM Pierluigi Paganini
UPnP-enabled devices running outdated software are exposed to a wide range of attacks exploiting known flaws in UPnP libraries. A broad range of UPnP-enabled devices running outdated software are exposed to attacks exploiting known flaws in UPnP libraries, Tony Yang, Home Network Researcher, has found 1,648,769 devices using the Shodan search engine, 35% were using […] The post Too much UPnP-enabled connected devices still vulnerable to cyber attacks appeared first on Security Affairs.

Whitefly espionage group was linked to SingHealth Singapore Healthcare Breach

Thursday March 7th, 2019 07:39:31 AM Pierluigi Paganini
Security experts at Symantec linked the massive Singapore Healthcare breach suffered by SingHealth to the ‘Whitefly’ cyberespionage group. In 2018, the largest healthcare group in Singapore, SingHealth, has suffered a massive data breach that exposed personal information of 1.5 million patients who visited the clinics of the company between May 2015 and July 2018. Stolen […] The post Whitefly espionage group was linked to SingHealth Singapore Healthcare Breach appeared first on Security Affairs.


Sorry, the http://feeds.feedburner.com/SansInstituteNewsbites feed is not available at this time.

WeLiveSecurity

WeLiveSecurity

Last feed update: Thursday November 6th, 2025 04:39:34 PM

Sharing is scaring: The WhatsApp screen-sharing scam you didn’t see coming

Wednesday November 5th, 2025 10:00:00 AM

How a fast-growing scam is tricking WhatsApp users into revealing their most sensitive financial and other data

How social engineering works | Unlocked 403 cybersecurity podcast (S2E6)

Tuesday November 4th, 2025 10:00:00 AM

Think you could never fall for an online scam? Think again. Here's how scammers could exploit psychology to deceive you – and what you can do to stay one step ahead

Ground zero: 5 things to do after discovering a cyberattack

Monday November 3rd, 2025 10:00:00 AM

When every minute counts, preparation and precision can mean the difference between disruption and disaster

This month in security with Tony Anscombe – October 2025 edition

Friday October 31st, 2025 10:00:00 AM

From the end of Windows 10 support to scams on TikTok and state-aligned hackers wielding AI, October's headlines offer a glimpse of what's shaping cybersecurity right now

Fraud prevention: How to help older family members avoid scams

Thursday October 30th, 2025 10:00:00 AM

Families that combine open communication with effective behavioral and technical safeguards can cut the risk dramatically

Cybersecurity Awareness Month 2025: When seeing isn't believing

Wednesday October 29th, 2025 10:00:00 AM

Deepfakes are blurring the line between real and fake and fraudsters are cashing in, using synthetic media for all manner of scams

Recruitment red flags: Can you spot a spy posing as a job seeker?

Tuesday October 28th, 2025 10:00:00 AM

Here’s what to know about a recent spin on an insider threat – fake North Korean IT workers infiltrating western firms

How MDR can give MSPs the edge in a competitive market

Monday October 27th, 2025 10:00:00 AM

With cybersecurity talent in short supply and threats evolving fast, managed detection and response is emerging as a strategic necessity for MSPs

Cybersecurity Awareness Month 2025: Cyber-risk thrives in the shadows

Friday October 24th, 2025 11:53:08 AM

Shadow IT leaves organizations exposed to cyberattacks and raises the risk of data loss and compliance failures

Gotta fly: Lazarus targets the UAV sector

Thursday October 23rd, 2025 04:00:00 AM

ESET research analyzes a recent instance of the Operation DreamJob cyberespionage campaign conducted by Lazarus, a North Korea-aligned APT group

SnakeStealer: How it preys on personal data – and how you can protect yourself

Wednesday October 22nd, 2025 09:00:00 AM

Here’s what to know about the malware with an insatiable appetite for valuable data, so much so that it tops this year's infostealer detection charts

Cybersecurity Awareness Month 2025: Building resilience against ransomware

Monday October 20th, 2025 02:11:52 PM

Ransomware rages on and no organization is too small to be targeted by cyber-extortionists. How can your business protect itself against the threat?

Minecraft mods: Should you 'hack' your game?

Thursday October 16th, 2025 09:00:00 AM

Some Minecraft mods don’t help build worlds – they break them. Here’s how malware can masquerade as a Minecraft mod.

IT service desks: The security blind spot that may put your business at risk

Wednesday October 15th, 2025 09:00:00 AM

Could a simple call to the helpdesk enable threat actors to bypass your security controls? Here’s how your team can close a growing security gap.

Cybersecurity Awareness Month 2025: Why software patching matters more than ever

Tuesday October 14th, 2025 02:21:37 PM

As the number of software vulnerabilities continues to increase, delaying or skipping security updates could cost your business dearly.

AI-aided malvertising: Exploiting a chatbot to spread scams

Monday October 13th, 2025 09:00:00 AM

Cybercriminals have tricked X’s AI chatbot into promoting phishing scams in a technique that has been nicknamed “Grokking”. Here’s what to know about it.

How Uber seems to know where you are – even with restricted location permissions

Thursday October 9th, 2025 09:00:00 AM

Is the ride-hailing app secretly tracking you? Not really, but this iOS feature may make it feel that way.

Cybersecurity Awareness Month 2025: Passwords alone are not enough

Wednesday October 8th, 2025 10:08:43 AM

Never rely on just a password, however strong it may be. Multi-factor authentication is essential for anyone who wants to protect their online accounts from intruders.

The case for cybersecurity: Why successful businesses are built on protection

Tuesday October 7th, 2025 09:00:00 AM

Company leaders need to recognize the gravity of cyber risk, turn awareness into action, and put security front and center

Beware of threats lurking in booby-trapped PDF files

Monday October 6th, 2025 09:00:00 AM

Looks can be deceiving, so much so that the familiar icon could mask malware designed to steal your data and money.

Manufacturing under fire: Strengthening cyber-defenses amid surging threats

Friday October 3rd, 2025 09:00:00 AM

Manufacturers operate in one of the most unforgiving threat environments and face a unique set of pressures that make attacks particularly damaging

New spyware campaigns target privacy-conscious Android users in the UAE

Thursday October 2nd, 2025 08:55:00 AM

ESET researchers have discovered campaigns distributing spyware disguised as Android Signal and ToTok apps, targeting users in the United Arab Emirates

Cybersecurity Awareness Month 2025: Knowledge is power

Wednesday October 1st, 2025 02:49:17 PM

We're kicking off the month with a focus on the human element: the first line of defense, but also the path of least resistance for many cybercriminals

This month in security with Tony Anscombe – September 2025 edition

Monday September 29th, 2025 10:00:49 AM

The past 30 days have seen no shortage of new threats and incidents that brought into sharp relief the need for well-thought-out cyber-resilience plans

Roblox executors: It’s all fun and games until someone gets hacked

Friday September 26th, 2025 09:00:00 AM

You could be getting more than you bargained for when you download that cheat tool promising quick wins

DeceptiveDevelopment: From primitive crypto theft to sophisticated AI-based deception

Thursday September 25th, 2025 08:59:34 AM

Malware operators collaborate with covert North Korean IT workers, posing a threat to both headhunters and job seekers

Watch out for SVG files booby-trapped with malware

Monday September 22nd, 2025 10:24:45 AM

What you see is not always what you get as cybercriminals increasingly weaponize SVG files as delivery vectors for stealthy malware

Gamaredon X Turla collab

Friday September 19th, 2025 08:55:00 AM

Notorious APT group Turla collaborates with Gamaredon, both FSB-associated groups, to compromise high‑profile targets in Ukraine

Small businesses, big targets: Protecting your business against ransomware

Thursday September 18th, 2025 09:00:00 AM

Long known to be a sweet spot for cybercriminals, small businesses are more likely to be victimized by ransomware than large enterprises

HybridPetya: The Petya/NotPetya copycat comes with a twist

Tuesday September 16th, 2025 11:33:45 AM

HybridPetya is the fourth publicly known real or proof-of-concept bootkit with UEFI Secure Boot bypass functionality

Introducing HybridPetya: Petya/NotPetya copycat with UEFI Secure Boot bypass

Friday September 12th, 2025 09:00:00 AM

UEFI copycat of Petya/NotPetya exploiting CVE-2024-7344 discovered on VirusTotal

Are cybercriminals hacking your systems – or just logging in?

Thursday September 11th, 2025 08:55:00 AM

As bad actors often simply waltz through companies’ digital front doors with a key, here’s how to keep your own door locked tight

Preventing business disruption and building cyber-resilience with MDR

Tuesday September 9th, 2025 09:00:00 AM

Given the serious financial and reputational risks of incidents that grind business to a halt, organizations need to prioritize a prevention-first cybersecurity strategy

Under lock and key: Safeguarding business data with encryption

Friday September 5th, 2025 08:53:00 AM

As the attack surface expands and the threat landscape grows more complex, it’s time to consider whether your data protection strategy is fit for purpose

GhostRedirector poisons Windows servers: Backdoors with a side of Potatoes

Thursday September 4th, 2025 08:55:00 AM

ESET researchers have identified a new threat actor targeting Windows servers with a passive C++ backdoor and a malicious IIS module that manipulates Google search results

This month in security with Tony Anscombe – August 2025 edition

Thursday August 28th, 2025 09:00:00 AM

From Meta shutting down millions of WhatsApp accounts linked to scam centers all the way to attacks at water facilities in Europe, August 2025 saw no shortage of impactful cybersecurity news

Don’t let “back to school” become “back to (cyber)bullying”

Wednesday August 27th, 2025 09:00:00 AM

Cyberbullying is a fact of life in our digital-centric society, but there are ways to push back

First known AI-powered ransomware uncovered by ESET Research

Tuesday August 26th, 2025 11:12:38 PM

The discovery of PromptLock shows how malicious use of AI models could supercharge ransomware and other threats

"What happens online stays online" and other cyberbullying myths, debunked

Thursday August 21st, 2025 09:00:00 AM

Separating truth from fiction is the first step towards making better parenting decisions. Let’s puncture some of the most common misconceptions about online harassment.

The need for speed: Why organizations are turning to rapid, trustworthy MDR

Tuesday August 19th, 2025 09:00:00 AM

How top-tier managed detection and response (MDR) can help organizations stay ahead of increasingly agile and determined adversaries

Investors beware: AI-powered financial scams swamp social media

Monday August 18th, 2025 09:00:00 AM

Can you tell the difference between legitimate marketing and deepfake scam ads? It’s not always as easy as you may think.

Supply-chain dependencies: Check your resilience blind spot

Tuesday August 12th, 2025 02:08:03 PM

Does your business truly understand its dependencies, and how to mitigate the risks posed by an attack on them?

How the always-on generation can level up its cybersecurity game

Tuesday August 12th, 2025 09:00:00 AM

Digital natives are comfortable with technology, but may be more exposed to online scams and other threats than they think

WinRAR zero-day exploited in espionage attacks against high-value targets

Monday August 11th, 2025 06:18:26 PM

The attacks used spearphishing campaigns to target financial, manufacturing, defense, and logistics companies in Europe and Canada, ESET research finds

Update WinRAR tools now: RomCom and others exploiting zero-day vulnerability

Monday August 11th, 2025 09:00:00 AM

ESET Research discovered a zero-day vulnerability in WinRAR being exploited in the wild in the guise of job application documents; the weaponized archives exploited a path traversal flaw to compromise their targets

Black Hat USA 2025: Is a high cyber insurance premium about your risk, or your insurer’s?

Friday August 8th, 2025 02:25:09 PM

A sky-high premium may not always reflect your company’s security posture

Android adware: What is it, and how do I get it off my device?

Friday August 8th, 2025 09:00:00 AM

Is your phone suddenly flooded with aggressive ads, slowing down performance or leading to unusual app behavior? Here’s what to do.

Black Hat USA 2025: Policy compliance and the myth of the silver bullet

Thursday August 7th, 2025 04:03:24 PM

Who’s to blame when the AI tool managing a company’s compliance status gets it wrong?

Black Hat USA 2025: Does successful cybersecurity today increase cyber-risk tomorrow?

Thursday August 7th, 2025 02:23:42 PM

Success in cybersecurity is when nothing happens, plus other standout themes from two of the event’s keynotes

ESET Threat Report H1 2025: ClickFix, infostealer disruptions, and ransomware deathmatch

Tuesday August 5th, 2025 09:00:00 AM

Threat actors are embracing ClickFix, ransomware gangs are turning on each other – toppling even the leaders – and law enforcement is disrupting one infostealer after another

Is your phone spying on you? | Unlocked 403 cybersecurity podcast (S2E5)

Friday August 1st, 2025 12:08:55 PM

Here's what you need to know about the inner workings of modern spyware and how to stay away from apps that know too much

Why the tech industry needs to stand firm on preserving end-to-end encryption

Friday August 1st, 2025 09:00:00 AM

Restricting end-to-end encryption on a single-country basis would not only be absurdly difficult to enforce, but it would also fail to deter criminal activity

This month in security with Tony Anscombe – July 2025 edition

Thursday July 31st, 2025 07:43:21 AM

Here's a look at cybersecurity stories that moved the needle, raised the alarm, or offered vital lessons in July 2025

The hidden risks of browser extensions – and how to stay safe

Tuesday July 29th, 2025 09:00:00 AM

Not all browser add-ons are handy helpers – some may contain far more than you have bargained for

SharePoint under fire: ToolShell attacks hit organizations worldwide

Friday July 25th, 2025 07:28:07 AM

The ToolShell bugs are being exploited by cybercriminals and APT groups alike, with the US on the receiving end of 13 percent of all attacks

ToolShell: An all-you-can-eat buffet for threat actors

Thursday July 24th, 2025 09:00:45 AM

ESET Research has been monitoring attacks involving the recently discovered ToolShell zero-day vulnerabilities

Rogue CAPTCHAs: Look out for phony verification pages spreading malware

Thursday July 24th, 2025 08:30:00 AM

Before rushing to prove that you're not a robot, be wary of deceptive human verification pages as an increasingly popular vector for delivering malware

Why is your data worth so much? | Unlocked 403 cybersecurity podcast (S2E4)

Tuesday July 22nd, 2025 10:18:17 AM

Behind every free online service, there's a price being paid. Learn why your digital footprint is so valuable, and when you might actually be the product.

Unmasking AsyncRAT: Navigating the labyrinth of forks

Tuesday July 15th, 2025 08:59:52 AM

ESET researchers map out the labyrinthine relationships among the vast hierarchy of AsyncRAT variants

How to get into cybersecurity | Unlocked 403 cybersecurity podcast (S2E3)

Friday July 4th, 2025 09:40:16 AM

Cracking the code of a successful cybersecurity career starts here. Hear from ESET's Robert Lipovsky as he reveals how to break into and thrive in this fast-paced field.

Task scams: Why you should never pay to get paid

Friday July 4th, 2025 09:00:00 AM

Some schemes might sound unbelievable, but they’re easier to fall for than you think. Here’s how to avoid getting played by gamified job scams.

How government cyber cuts will affect you and your business

Thursday July 3rd, 2025 09:00:00 AM

Deep cuts in cybersecurity spending risk creating ripple effects that will put many organizations at a higher risk of falling victim to cyberattacks

Gamaredon in 2024: Cranking out spearphishing campaigns against Ukraine with an evolved toolset

Wednesday July 2nd, 2025 08:58:22 AM

ESET Research analyzes Gamaredon’s updated cyberespionage toolset, new stealth-focused techniques, and aggressive spearphishing operations observed throughout 2024

ESET Threat Report H1 2025: Key findings

Tuesday July 1st, 2025 11:51:24 AM

ESET Chief Security Evangelist Tony Anscombe looks at some of the report's standout findings and their implications for organizations in 2025

ESET APT Activity Report Q4 2024–Q1 2025: Malware sharing, wipers and exploits

Tuesday July 1st, 2025 09:00:00 AM

ESET experts discuss Sandworm’s new data wiper, relentless campaigns by UnsolicitedBooker, attribution challenges amid tool-sharing, and other key findings from the latest APT Activity Report

This month in security with Tony Anscombe – June 2025 edition

Saturday June 28th, 2025 07:52:31 AM

From Australia's new ransomware payment disclosure rules to another record-breaking DDoS attack, June 2025 saw no shortage of interesting cybersecurity news

ESET Threat Report H1 2025

Thursday June 26th, 2025 09:38:02 AM

A view of the H1 2025 threat landscape as seen by ESET telemetry and from the perspective of ESET threat detection and research experts

BladedFeline: Whispering in the dark

Thursday June 5th, 2025 09:00:00 AM

ESET researchers analyzed a cyberespionage campaign conducted by BladedFeline, an Iran-aligned APT group with likely ties to OilRig

Don’t let dormant accounts become a doorway for cybercriminals

Monday June 2nd, 2025 09:00:00 AM

Do you have online accounts you haven't used in years? If so, a bit of digital spring cleaning might be in order.

This month in security with Tony Anscombe – May 2025 edition

Friday May 30th, 2025 09:00:00 AM

From a flurry of attacks targeting UK retailers to campaigns corralling end-of-life routers into botnets, it's a wrap on another month filled with impactful cybersecurity news

Word to the wise: Beware of fake Docusign emails

Tuesday May 27th, 2025 09:00:00 AM

Cybercriminals impersonate the trusted e-signature brand and send fake Docusign notifications to trick people into giving away their personal or corporate data

Danabot under the microscope

Friday May 23rd, 2025 11:43:50 AM

ESET Research has been tracking Danabot’s activity since 2018 as part of a global effort that resulted in a major disruption of the malware’s infrastructure

Danabot: Analyzing a fallen empire

Thursday May 22nd, 2025 08:03:14 PM

ESET Research shares its findings on the workings of Danabot, an infostealer recently disrupted in a multinational law enforcement operation

Lumma Stealer: Down for the count

Thursday May 22nd, 2025 02:53:19 PM

The bustling cybercrime enterprise has been dealt a significant blow in a global operation that relied on the expertise of ESET and other technology companies

ESET takes part in global operation to disrupt Lumma Stealer

Wednesday May 21st, 2025 04:15:00 PM

Our intense monitoring of tens of thousands of malicious samples helped this global disruption operation

The who, where, and how of APT attacks in Q4 2024–Q1 2025

Monday May 19th, 2025 05:17:43 PM

ESET Chief Security Evangelist Tony Anscombe highlights key findings from the latest issue of the ESET APT Activity Report

ESET APT Activity Report Q4 2024–Q1 2025

Monday May 19th, 2025 08:55:00 AM

An overview of the activities of selected APT groups investigated and analyzed by ESET Research in Q4 2024 and Q1 2025

Sednit abuses XSS flaws to hit gov't entities, defense companies

Thursday May 15th, 2025 01:15:04 PM

Operation RoundPress targets webmail software to steal secrets from email accounts belonging mainly to governmental organizations in Ukraine and defense contractors in the EU

Operation RoundPress

Thursday May 15th, 2025 07:22:04 AM

ESET researchers uncover a Russia-aligned espionage operation targeting webmail servers via XSS vulnerabilities

How can we counter online disinformation? | Unlocked 403 cybersecurity podcast (S2E2)

Monday May 12th, 2025 09:00:00 AM

Ever wondered why a lie can spread faster than the truth? Tune in for an insightful look at disinformation and how we can fight one of the most pressing challenges facing our digital world.

Catching a phish with many faces

Friday May 9th, 2025 09:00:00 AM

Here’s a brief dive into the murky waters of shape-shifting attacks that leverage dedicated phishing kits to auto-generate customized login pages on the fly

Beware of phone scams demanding money for ‘missed jury duty’

Wednesday May 7th, 2025 09:00:00 AM

When we get the call, it’s our legal responsibility to attend jury service. But sometimes that call won’t come from the courts – it will be a scammer.

Toll road scams are in overdrive: Here’s how to protect yourself

Tuesday May 6th, 2025 09:00:00 AM

Have you received a text message about an unpaid road toll? Make sure you’re not the next victim of a smishing scam.

RSAC 2025 wrap-up – Week in security with Tony Anscombe

Friday May 2nd, 2025 02:16:05 PM

From the power of collaborative defense to identity security and AI, catch up on the event's key themes and discussions

TheWizards APT group uses SLAAC spoofing to perform adversary-in-the-middle attacks

Wednesday April 30th, 2025 09:00:00 AM

ESET researchers analyzed Spellbinder, a lateral movement tool used to perform adversary-in-the-middle attacks

This month in security with Tony Anscombe – April 2025 edition

Tuesday April 29th, 2025 11:43:33 AM

From the near-demise of MITRE's CVE program to a report showing that AI outperforms elite red teamers in spearphishing, April 2025 was another whirlwind month in cybersecurity

How safe and secure is your iPhone really?

Monday April 28th, 2025 09:47:37 AM

Your iPhone isn't necessarily as invulnerable to security threats as you may think. Here are the key dangers to watch out for and how to harden your device against bad actors.

Deepfake 'doctors' take to TikTok to peddle bogus cures

Friday April 25th, 2025 09:00:00 AM

Look out for AI-generated 'TikDocs' who exploit the public's trust in the medical profession to drive sales of sketchy supplements

How fraudsters abuse Google Forms to spread scams

Wednesday April 23rd, 2025 09:00:00 AM

The form and quiz-building tool is a popular vector for social engineering and malware. Here’s how to stay safe.

Will super-smart AI be attacking us anytime soon?

Tuesday April 22nd, 2025 09:00:00 AM

What practical AI attacks exist today? “More than zero” is the answer – and they’re getting better.

CapCut copycats are on the prowl

Thursday April 17th, 2025 09:00:00 AM

Cybercriminals lure content creators with promises of cutting-edge AI wizardry, only to attempt to steal their data or hijack their devices instead

They’re coming for your data: What are infostealers and how do I stay safe?

Wednesday April 16th, 2025 09:00:00 AM

Here's what to know about malware that raids email accounts, web browsers, crypto wallets, and more – all in a quest for your sensitive data

Attacks on the education sector are surging: How can cyber-defenders respond?

Monday April 14th, 2025 09:00:00 AM

Academic institutions have a unique set of characteristics that makes them attractive to bad actors. What's the right antidote to cyber-risk?

Watch out for these traps lurking in search results

Thursday April 10th, 2025 09:00:00 AM

Here’s how to avoid being hit by fraudulent websites that scammers can catapult directly to the top of your search results

So your friend has been hacked: Could you be next?

Wednesday April 9th, 2025 09:00:00 AM

When a ruse puts on a familiar face, your guard might drop, making you an easy mark. Learn how to tell a friend apart from a foe.

1 billion reasons to protect your identity online

Tuesday April 8th, 2025 09:00:00 AM

Corporate data breaches are a gateway to identity fraud, but they’re not the only one. Here’s a lowdown on how your personal data could be stolen – and how to make sure it isn’t.

The good, the bad and the unknown of AI: A Q&A with Mária Bieliková

Thursday April 3rd, 2025 09:00:00 AM

The computer scientist and AI researcher shares her thoughts on the technology’s potential and pitfalls – and what may lie ahead for us

This month in security with Tony Anscombe – March 2025 edition

Monday March 31st, 2025 10:46:09 AM

From an exploited vulnerability in a third-party ChatGPT tool to a bizarre twist on ransomware demands, it's a wrap on another month filled with impactful cybersecurity news

Resilience in the face of ransomware: A key to business survival

Monday March 31st, 2025 09:00:00 AM

Your company’s ability to tackle the ransomware threat head-on can ultimately be a competitive advantage

Making it stick: How to get the most out of cybersecurity training

Friday March 28th, 2025 10:00:00 AM

Security awareness training doesn’t have to be a snoozefest – games and stories can help instill ‘sticky’ habits that will kick in when a danger is near


Sucuri Blog

Protect Your Interwebs!

Last feed update: Thursday November 6th, 2025 04:39:35 PM

FBI Public Service Annoucement: Defacements Exploiting WordPress Vulnerabilities

Wednesday April 8th, 2015 12:24:11 AM Daniel Cid
The US Federal Bureau of Investigation (FBI) just released a public service announcement (PSA) to the public about a large number of websites being exploited and compromised through WordPress plugin vulnerabilities: Continuous Web site defacements are being perpetrated by individuals sympathetic to the Islamic State in the Levant (ISIL) a.k.a. Islamic State of Iraq andRead More

Security Advisory: Persistent XSS in WP-Super-Cache

Tuesday April 7th, 2015 03:12:29 PM Marc-Alexandre Montpas
Security Risk: Dangerous Exploitation level: Very Easy/Remote DREAD Score: 8/10 Vulnerability: Persistent XSS Patched Version:  1.4.4 During a routine audit for our Website Firewall (WAF), we discovered a dangerous Persistent XSS vulnerability affecting the very popular WP-Super-Cache plugin (more than a million active installs according to wordpress.org). The security issue, as well as another bug-fixRead More

Website Malware – The SWF iFrame Injector Evolves

Thursday April 2nd, 2015 03:56:00 PM Peter Gramantik
Last year, we released a post about a malware injector found in an Adobe Flash (.SWF) file. In that post, we showed how a .SWF file is used to inject an invisible, malicious iFrame. It appears that the author of that Flash malware continued with this method of infection. Now we are seeing more varietiesRead More

Intro to E-Commerce and PCI Compliance – Part I

Tuesday March 31st, 2015 09:14:15 PM Daniel Cid
Have you ever heard of the term PCI? Specifically, PCI compliance? If you have an e-commerce website, you probably have already heard about it. But do you really understand what it means for you and your online business? In this series, we will try to explain the PCI standard and how it affects you andRead More

WordPress Malware Causes Psuedo-Darkleech Infection

Thursday March 26th, 2015 09:00:37 AM Denis Sinegubko
Darkleech is a nasty malware infection that infects web servers at the root level. It use malicious Apache modules to add hidden iFrames to certain responses. It’s difficult to detect because the malware is only active when both server and site admins are not logged in, and the iFrame is only injected once a dayRead More

Why Website Reinfections Happen

Tuesday March 24th, 2015 04:38:52 AM Valentin
I joined Sucuri a little over a month ago. My job is actually as a Social Media Specialist, but we have this process where regardless of your job you have to learn what website infections look like and more importantly, how to clean them. It’s this idea that regardless of you are you must alwaysRead More

The Impacts of a Hacked Website

Thursday March 19th, 2015 01:15:37 PM Tony Perez
Today, with the proliferation of open-source technologies like WordPress, Joomla! and other Content Management Systems (CMS) people around the world are able to quickly establish a virtual presence with little to no cost. In the process however, a lot is being lost in terms of what it means to own a website. We are failingRead More

Understanding WordPress Plugin Vulnerabilities

Tuesday March 17th, 2015 05:19:42 PM Daniel Cid
The last 7 days have been very busy with a number of vulnerabilities being disclosed on multiple WordPress plugins. Some of them are minor issues, some are more relevant, while others are what we’d categorize as noise. How are you supposed to make sense of all this? To help provide some clarity on the influxRead More

Inverted WordPress Trojan

Wednesday March 11th, 2015 06:40:16 PM Denis Sinegubko
Trojan (or trojan horse) is software that does (or pretends to be doing) something useful but also contains a secret malicious payload that inconspicuously does something bad. In WordPress, typical trojans are plugins and themes (usually pirated) which may have backdoors, or send out spam, create doorways, inject hidden links or malware. The trojan modelRead More

Security Advisory: MainWP-Child WordPress Plugin

Monday March 9th, 2015 11:56:20 PM Mickael Nadeau
Security Risk: Critical Exploitation level: Very Easy/Remote DREAD Score: 9/10 Vulnerability: Password bypass / Privilege Escalation Patched Version:  2.0.9.2 During a routine audit of our Website Firewall (WAF), we found a critical vulnerability affecting the popular MainWP Child WordPress plugin. According to worpdress.org, it is installed on more than 90,000 WordPress sites as as remote administrationRead More


Sorry, the http://feeds.feedburner.com/threattracksecurity feed is not available at this time.
Sorry, the http://feeds.feedblitz.com/alienvault-blogs feed is not available at this time.
Sorry, the http://hackmageddon.com/feed/ feed is not available at this time.
Sorry, the http://feeds.feedburner.com/SeculertResearchLab feed is not available at this time.
Failed to get content from 'http://news.netcraft.com/feed/'
Failed to get content from 'http://community.websense.com/Blogs/securitylabs/atom.aspx'
Sorry, the http://blog.beyondtrust.com/feed?post_type=post feed is not available at this time.

Google Online Security Blog

The latest news and insights from Google on security and safety on the Internet.

Last feed update: Thursday November 6th, 2025 04:39:38 PM

How Android provides the most effective protection to keep you safe from mobile scams

Friday October 31st, 2025 03:30:21 PM
Posted by Lyubov Farafonova, Product Manager, Phone by Google; Alberto Pastor Nieto, Sr. Product Manager Google Messages and RCS Spam and Abuse; Vijay Pareek, Manager, Android Messaging Trust and Safety As Cybersecurity Awareness Month wraps up, we’re focusing on one of today's most pervasive digital threats: mobile scams. In the last 12 months, fraudsters have used advanced AI tools to create more convincing schemes, resulting in over $400 billion in stolen funds globally.¹ For years, Android has been on the frontlines in the battle against scammers, using the best of Google AI to build proactive, multi-layered protections that can anticipate and block scams before they reach you. Android’s scam defenses protect users around the world from over 10 billion suspected malicious calls and messages every month2. In addition, Google continuously performs safety checks to maintain the integrity of the RCS service. In the past month alone, this ongoing process blocked over 100 million suspicious numbers from using RCS, stopping potential scams before they could even be sent. To show how our scam protections work in the real world, we asked users and independent security experts to compare how well Android and iOS protect you from these threats. We're also releasing a new report that explains how modern text scams are orchestrated, helping you understand the tactics fraudsters use and how to spot them. Survey shows Android users’ confidence in scam protections Google and YouGov3 surveyed 5,000 smartphone users across the U.S., India, and Brazil about their experiences. The findings were clear: Android users reported receiving fewer scam texts and felt more confident that their device was keeping them safe. Android users were 58% more likely than iOS users to say they had not received any scam texts in the week prior to the survey. The advantage was even stronger on Pixel, where users were 96% more likely than iPhone owners to report zero scam texts. At the other end of the spectrum, iOS users were 65% more likely than Android users to report receiving three or more scam texts in a week. The difference became even more pronounced when comparing iPhone to Pixel, with iPhone users 136% more likely to say they had received a heavy volume of scam messages. Android users were 20% more likely than iOS users to describe their device’s scam protections as “very effective” or “extremely effective.” When comparing Pixel to iPhone, iPhone users were 150% more likely to say their device was not effective at all in stopping mobile fraud. YouGov study findings on users’ experience with scams on Android and iOS Security researchers and analysts highlight Android’s AI-driven safeguards against sophisticated scams In a recent evaluation by Counterpoint Research4, a global technology market research firm, Android smartphones were found to have the most AI-powered protections. The independent study compared the latest Pixel, Samsung, Motorola, and iPhone devices, and found that Android provides comprehensive AI-driven safeguards across ten key protection areas, including email protections, browsing protections, and on-device behavioral protections. By contrast, iOS offered AI-powered protections in only two categories. You can see the full comparison in the visual below. Counterpoint Research comparison of Android and iOS AI-powered protections Cybersecurity firm Leviathan Security Group conducted a funded evaluation5 of scam and fraud protection on the iPhone 17, Moto Razr+ 2025, Pixel 10 Pro, and Samsung Galaxy Z Fold 7. Their analysis found that Android smartphones, led by the Pixel 10 Pro, provide the highest level of default scam and fraud protection.The report particularly noted Android's robust call screening, scam detection, and real-time scam warning authentication capabilities as key differentiators. Taken together, these independent expert assessments conclude that Android’s AI-driven safeguards provide more comprehensive and intelligent protection against mobile scams. Leviathan Security Group comparison of scam protections across various devices Why Android users see fewer scams Android’s proactive protections work across the platform to help you stay ahead of threats with the best of Google AI. Here’s how they work: Keeping your messages safe: Google Messages automatically filters known spam by analyzing sender reputation and message content, moving suspicious texts directly to your "spam & blocked" folder to keep them out of sight. For more complex threats, Scam Detection uses on-device AI to analyze messages from unknown senders for patterns of conversational scams (like pig butchering) and provide real-time warnings6. This helps secure your privacy while providing a robust shield against text scams. As an extra safeguard, Google Messages also helps block suspicious links in messages that are determined to be spam or scams. Combatting phone call scams: Phone by Google automatically blocks known spam calls so your phone never even rings, while Call Screen5 can answer the call on your behalf to identify fraudsters. If you answer, the protection continues with Scam Detection, which uses on-device AI to provide real-time warnings for suspicious conversational patterns6. This processing is completely ephemeral, meaning no call content is ever saved or leaves your device. Android also helps stop social engineering during the call itself by blocking high-risk actions6 like installing untrusted apps or disabling security settings, and warns you if your screen is being shared unknowingly. These safeguards are built directly into the core of Android, alongside other features like real-time app scanning in Google Play Protect and enhanced Safe Browsing in Chrome using LLMs. With Android, you can trust that you have intelligent, multi-layered protection against scams working for you. Android is always evolving to keep you one step ahead of scams In a world of evolving digital threats, you deserve to feel confident that your phone is keeping you safe. That’s why we use the best of Google AI to build intelligent protections that are always improving and work for you around the clock, so you can connect, browse, and communicate with peace of mind. See these protections in action in our new infographic and learn more about phone call scams in our 2025 Phone by Google Scam Report. 1: Data from Global Anti-Scam Alliance, October 2025 ↩ 2: This total comprises all instances where a message or call was proactively blocked or where a user was alerted to potential spam or scam activity. ↩ 3: Google/YouGov survey, July-August 2025; n=5,100 across US, IN, BR ↩ 4: Google/Counterpoint Research, “Assessing the State of AI-Powered Mobile Security”, Oct. 2025; based on comparing the Pixel 10 Pro, iPhone 17 Pro, Samsung Galaxy S25 Ultra, OnePlus 13, Motorola Razr+ 2025. Evaluation based on no-cost smartphone features enabled by default. Some features may not be available in all countries. ↩ 5. Google/Leviathan Security Group, “October 2025 Mobile Platform Security & Fraud Prevention Assessment”, Oct. 2025; based on comparing the Pixel 10 Pro, iPhone 17 Pro, Samsung Galaxy Z Fold 7 and Motorola Razr+ 2025. Evaluation based on no-cost smartphone features enabled by default. Some features may not be available in all countries. ↩ ↩ 6. Accuracy may vary. Availability varies. ↩ ↩ ↩

HTTPS by default

Tuesday October 28th, 2025 09:16:37 PM
One year from now, with the release of Chrome 154 in October 2026, we will change the default settings of Chrome to enable “Always Use Secure Connections”. This means Chrome will ask for the user's permission before the first access to any public site without HTTPS. The “Always Use Secure Connections” setting warns users before accessing a site without HTTPS Chrome Security's mission is to make it safe to click on links. Part of being safe means ensuring that when a user types a URL or clicks on a link, the browser ends up where the user intended. When links don't use HTTPS, an attacker can hijack the navigation and force Chrome users to load arbitrary, attacker-controlled resources, and expose the user to malware, targeted exploitation, or social engineering attacks. Attacks like this are not hypothetical—software to hijack navigations is readily available and attackers have previously used insecure HTTP to compromise user devices in a targeted attack. Since attackers only need a single insecure navigation, they don't need to worry that many sites have adopted HTTPS—any single HTTP navigation may offer a foothold. What's worse, many plaintext HTTP connections today are entirely invisible to users, as HTTP sites may immediately redirect to HTTPS sites. That gives users no opportunity to see Chrome's "Not Secure" URL bar warnings after the risk has occurred, and no opportunity to keep themselves safe in the first place. To address this risk, we launched the “Always Use Secure Connections” setting in 2022 as an opt-in option. In this mode, Chrome attempts every connection over HTTPS, and shows a bypassable warning to the user if HTTPS is unavailable. We also previously discussed our intent to move towards HTTPS by default. We now think the time has come to enable “Always Use Secure Connections” for all users by default. Now is the time. For more than a decade, Google has published the HTTPS transparency report, which tracks the percentage of navigations in Chrome that use HTTPS. For the first several years of the report, numbers saw an impressive climb, starting at around 30-45% in 2015, and ending up around the 95-99% range around 2020. Since then, progress has largely plateaued. HTTPS adoption expressed as a percentage of main frame page loads This rise represents a tremendous improvement to the security of the web, and demonstrates that HTTPS is now mature and widespread. This level of adoption is what makes it possible to consider stronger mitigations against the remaining insecure HTTP. Balancing user safety with friction While it may at first seem that 95% HTTPS means that the problem is mostly solved, the truth is that a few percentage points of HTTP navigations is still a lot of navigations. Since HTTP navigations remain a regular occurrence for most Chrome users, a naive approach to warning on all HTTP navigations would be quite disruptive. At the same time, as the plateau demonstrates, doing nothing would allow this risk to persist indefinitely. To balance these risks, we have taken steps to ensure that we can help the web move towards safer defaults, while limiting the potential annoyance warnings will cause to users. One way we're balancing risks to users is by making sure Chrome does not warn about the same sites excessively. In all variants of the "Always Use Secure Connections" settings, so long as the user regularly visits an insecure site, Chrome will not warn the user about that site repeatedly. This means that rather than warn users about 1 out of 50 navigations, Chrome will only warn users when they visit a new (or not recently visited) site without using HTTPS. To further address the issue, it's important to understand what sort of traffic is still using HTTP. The largest contributor to insecure HTTP by far, and the largest contributor to variation across platforms, is insecure navigations to private sites. The graph above includes both those to public sites, such as example.com, and navigations to private sites, such as local IP addresses like 192.168.0.1, single-label hostnames, and shortlinks like intranet/. While it is free and easy to get an HTTPS certificate that is trusted by Chrome for a public site, acquiring an HTTPS certificate for a private site unfortunately remains complicated. This is because private names are "non-unique"—private names can refer to different hosts on different networks. There is no single owner of 192.168.0.1 for a certification authority to validate and issue a certificate to. HTTP navigations to private sites can still be risky, but are typically less dangerous than their public site counterparts because there are fewer ways for an attacker to take advantage of these HTTP navigations. HTTP on private sites can only be abused by an attacker also on your local network, like on your home wifi or in a corporate network. If you exclude navigations to private sites, then the distribution becomes much tighter across platforms. In particular, Linux jumps from 84% HTTPS to nearly 97% HTTPS when limiting the analysis to public sites only. Windows increases from 95% to 98% HTTPS, and both Android and Mac increase to over 99% HTTPS. In recognition of the reduced risk HTTP to private sites represents, last year we introduced a variant of “Always Use Secure Connections” for public sites only. For users who frequently access private sites (such as those in enterprise settings, or web developers), excluding warnings on private sites significantly reduces the volume of warnings those users will see. Simultaneously, for users who do not access private sites frequently, this mode introduces only a small reduction in protection. This is the variant we intend to enable for all users next year. “Always Use Secure Connections,” available at chrome://settings/security In Chrome 141, we experimented with enabling “Always Use Secure Connections” for public sites by default for a small percentage of users. We wanted to validate our expectations that this setting keeps users safer without burdening them with excessive warnings. Analyzing the data from the experiment, we confirmed that the number of warnings seen by any users is considerably lower than 3% of navigations—in fact, the median user sees fewer than one warning per week, and the ninety-fifth percentile user sees fewer than three warnings per week.. Understanding HTTP usage Once “Always Use Secure Connections” is the default and additional sites migrate away from HTTP, we expect the actual warning volume to be even lower than it is now. In parallel to our experiments, we have reached out to a number of companies responsible for the most HTTP navigations, and expect that they will be able to migrate away from HTTP before the change in Chrome 154. For many of these organizations, transitioning to HTTPS isn't disproportionately hard, but simply has not received attention. For example, many of these sites use HTTP only for navigations that immediately redirect to HTTPS sites—an insecure interaction which was previously completely invisible to users. Another current use case for HTTP is to avoid mixed content blocking when accessing devices on the local network. Private addresses, as discussed above, often do not have trusted HTTPS certificates, due to the difficulties of validating ownership of a non-unique name. This means most local network traffic is over HTTP, and cannot be initiated from an HTTPS page—the HTTP traffic counts as insecure mixed content, and is blocked. One common use case for needing to access the local network is to configure a local network device, e.g. the manufacturer might host a configuration portal at config.example.com, which then sends requests to a local device to configure it. Previously, these types of pages needed to be hosted without HTTPS to avoid mixed content blocking. However, we recently introduced a local network access permission, which both prevents sites from accessing the user’s local network without consent, but also allows an HTTPS site to bypass mixed content checks for the local network once the permission has been granted. This can unblock migrating these domains to HTTPS. Changes in Chrome We will enable the "Always Use Secure Connections" setting in its public-sites variant by default in October 2026, with the release of Chrome 154. Prior to enabling it by default for all users, in Chrome 147, releasing in April 2026, we will enable Always Use Secure Connections in its public-sites variant for the over 1 billion users who have opted-in to Enhanced Safe Browsing protections in Chrome. While it is our hope and expectation that this transition will be relatively painless for most users, users will still be able to disable the warnings by disabling the "Always Use Secure Connections" setting. If you are a website developer or IT professional, and you have users who may be impacted by this feature, we very strongly recommend enabling the "Always Use Secure Connections" setting today to help identify sites that you may need to work to migrate. IT professionals may find it useful to read our additional resources to better understand the circumstances where warnings will be shown, how to mitigate them, and how organizations that manage Chrome clients (like enterprises or educational institutions) can ensure that Chrome shows the right warnings to meet those organizations' needs. Looking Forward While we believe that warning on insecure public sites represents a significant step forward for the security of the web, there is still more work to be done. In the future, we hope to work to further reduce barriers to adoption of HTTPS, especially for local network sites. This work will hopefully enable even more robust HTTP protections down the road. Posted by Chris Thompson, Mustafa Emre Acer, Serena Chen, Joe DeBlasio, Emily Stark and David Adrian, Chrome Security Team

Accelerating adoption of AI for cybersecurity at DEF CON 33

Wednesday September 24th, 2025 07:07:00 PM
Posted by Elie Bursztein and Marianna Tishchenko, Google Privacy, Safety and Security TeamEmpowering cyber defenders with AI is critical to tilting the cybersecurity balance back in their favor as they battle cybercriminals and keep users safe. To help accelerate adoption of AI for cybersecurity workflows, we partnered with Airbus at DEF CON 33 to host the GenSec Capture the Flag (CTF), dedicated to human-AI collaboration in cybersecurity. Our goal was to create a fun, interactive environment, where participants across various skill levels could explore how AI can accelerate their daily cybersecurity workflows.At GenSec CTF, nearly 500 participants successfully completed introductory challenges, with 23% of participants using AI for cybersecurity for the very first time. An overwhelming 85% of all participants found the event useful for learning how AI can be applied to security workflows. This positive feedback highlights that AI-centric CTFs can play a vital role in speeding up AI education and adoption in the security community.The CTF also offered a valuable opportunity for the community to use Sec-Gemini, Google’s experimental Cybersecurity AI, as an optional assistant available in the UI alongside major LLMs. And we received great feedback on Sec-Gemini, with 77% of respondents saying that they had found Sec-Gemini either “very helpful” or “extremely helpful” in assisting them with solving the challenges.  We want to thank the DEF CON community for the enthusiastic participation and for making this inaugural event a resounding success. The community feedback during the event has been invaluable for understanding how to improve Sec-Gemini, and we are already incorporating some of the lessons learned into the next iteration. We are committed to advancing the AI cybersecurity frontier and will continue working with the community to build tools that help protect people online. Stay tuned as we plan to share more research and key learnings from the CTF with the broader community.

Supporting Rowhammer research to protect the DRAM ecosystem

Monday September 15th, 2025 05:01:03 PM
Posted by Daniel MoghimiRowhammer is a complex class of vulnerabilities across the industry. It is a hardware vulnerability in DRAM where repeatedly accessing a row of memory can cause bit flips in adjacent rows, leading to data corruption. This can be exploited by attackers to gain unauthorized access to data, escalate privileges, or cause denial of service. Hardware vendors have deployed various mitigations, such as ECC and Target Row Refresh (TRR) for DDR5 memory, to mitigate Rowhammer and enhance DRAM reliability. However, the resilience of those mitigations against sophisticated attackers remains an open question.To address this gap and help the ecosystem with deploying robust defenses, Google has supported academic research and developed test platforms to analyze DDR5 memory. Our effort has led to the discovery of new attacks and a deeper understanding of Rowhammer on the current DRAM modules, helping to forge the way for further, stronger mitigations.What is Rowhammer? Rowhammer exploits a vulnerability in DRAM. DRAM cells store data as electrical charges, but these electric charges leak over time, causing data corruption. To prevent data loss, the memory controller periodically refreshes the cells. However, if a cell discharges before the refresh cycle, its stored bit may corrupt. Initially considered a reliability issue, it has been leveraged by security researchers to demonstrate privilege escalation attacks. By repeatedly accessing a memory row, an attacker can cause bit flips in neighboring rows. An adversary can exploit Rowhammer via:Reliably cause bit flips by repeatedly accessing adjacent DRAM rows.Coerce other applications or the OS into using these vulnerable memory pages.Target security-sensitive code or data to achieve privilege escalation.Or simply corrupt system’s memory to cause denial of service. Previous work has repeatedly demonstrated the possibility of such attacks from software [Revisiting rowhammer, Are we susceptible to rowhammer?, Drammer,  Flip feng shui, Jolt]. As a result, defending against Rowhammer is required for secure isolation in multi-tenant environments like the cloud. Rowhammer Mitigations The primary approach to mitigate Rowhammer is to detect which memory rows are being aggressively accessed and refreshing nearby rows before a bit flip occurs. TRR is a common example, which uses a number of counters to track accesses to a small number of rows adjacent to a potential victim row. If the access count for these aggressor rows reaches a certain threshold, the system issues a refresh to the victim row. TRR can be incorporated within the DRAM or in the host CPU.However, this mitigation is not foolproof. For example, the TRRespass attack showed that by simultaneously hammering multiple, non-adjacent rows, TRR can be bypassed. Over the past couple of years, more sophisticated attacks [Half-Double, Blacksmith] have emerged, introducing more efficient attack patterns. In response, one of our efforts was to collaborate with JEDEC, external researchers, and experts to define the PRAC as a new mitigation that deterministically detects Rowhammer by tracking all memory rows. However, current systems equipped with DDR5 lack support for PRAC or other robust mitigations. As a result, they rely on probabilistic approaches such as ECC and enhanced TRR to reduce the risk. While these measures have mitigated older attacks, their overall effectiveness against new techniques was not fully understood until our recent findings.Challenges with Rowhammer Assessment Mitigating Rowhammer attacks involves making it difficult for an attacker to reliably cause bit flips from software. Therefore, for an effective mitigation, we have to understand how a determined adversary introduces memory accesses that bypass existing mitigations. Three key information components can help with such an analysis:How the improved TRR and in-DRAM ECC work.How memory access patterns from software translate into low-level DDR commands.(Optionally) How any mitigations (e.g., ECC or TRR) in the host processor work.The first step is particularly challenging and involves reverse-engineering the proprietary in-DRAM TRR mechanism, which varies significantly between different manufacturers and device models. This process requires the ability to issue precise DDR commands to DRAM and analyze its responses, which is difficult on an off-the-shelf system. Therefore, specialized test platforms are essential.The second and third steps involve analyzing the DDR traffic between the host processor and the DRAM. This can be done using an off-the-shelf interposer, a tool that sits between the processor and DRAM. A crucial part of this analysis is understanding how a live system translates software-level memory accesses into the DDR protocol.The third step, which involves analyzing host-side mitigations, is sometimes optional. For example, host-side ECC (Error Correcting Code) is enabled by default on servers, while host-side TRR has only been implemented in some CPUs. Rowhammer testing platformsFor the first challenge, we partnered with Antmicro to develop two specialized, open-source FPGA-based Rowhammer test platforms. These platforms allow us to conduct in-depth testing on different types of DDR5 modules.DDR5 RDIMM Platform: A new DDR5 Tester board to meet the hardware requirements of Registered DIMM (RDIMM) memory, common in server computers.SO-DIMM Platform: A version that supports the standard SO-DIMM pinout compatible with off-the-shelf DDR5 SO-DIMM memory sticks, common in workstations and end-user devices.Antmicro designed and manufactured these open-source platforms and we worked closely with them, and researchers from ETH Zurich, to test the applicability of these platforms for analyzing off-the-shelf memory modules in RDIMM and SO-DIMM forms.Antmicro DDR5 RDIMM FPGA test platform in action.Phoenix Attacks on DDR5In collaboration with researchers from ETH, we applied the new Rowhammer test platforms to evaluate the effectiveness of current in-DRAM DDR5 mitigations. Our findings, detailed in the recently co-authored "Phoenix” research paper, reveal that we successfully developed custom attack patterns capable of bypassing enhanced TRR (Target Row Refresh) defense on DDR5 memory. We were able to create a novel self-correcting refresh synchronization attack technique, which allowed us to perform the first-ever Rowhammer privilege escalation exploit on a standard, production-grade desktop system equipped with DDR5 memory. While this experiment was conducted on an off-the-shelf workstation equipped with recent AMD Zen processors and SK Hynix DDR5 memory, we continue to investigate the applicability of our findings to other hardware configurations.Lessons learned We showed that current mitigations for Rowhammer attacks are not sufficient, and the issue remains a widespread problem across the industry. They do make it more difficult “but not impossible” to carry out attacks, since an attacker needs an in-depth understanding of the specific memory subsystem architecture they wish to target.Current mitigations based on TRR and ECC rely on probabilistic countermeasures that have insufficient entropy. Once an analyst understands how TRR operates, they can craft specific memory access patterns to bypass it. Furthermore, current ECC schemes were not designed as a security measure and are therefore incapable of reliably detecting errors.Memory encryption is an alternative countermeasure for Rowhammer. However, our current assessment is that without cryptographic integrity, it offers no valuable defense against Rowhammer. More research is needed to develop viable, practical encryption and integrity solutions.Path forwardGoogle has been a leader in JEDEC standardization efforts, for instance with PRAC, a fully approved standard to be supported in upcoming versions of DDR5/LPDDR6. It works by accurately counting the number of times a DRAM wordline is activated and alerts the system if an excessive number of activations is detected. This close coordination between the DRAM and the system gives PRAC a reliable way to address Rowhammer. In the meantime, we continue to evaluate and improve other countermeasures to ensure our workloads are resilient against Rowhammer. We collaborate with our academic and industry partners to improve analysis techniques and test platforms, and to share our findings with the broader ecosystem.Want to learn more?“Phoenix: Rowhammer Attacks on DDR5 with Self-Correcting Synchronization” will be presented at IEEE Security & Privacy 2026 in San Francisco, CA (MAY 18-21, 2026).

How Pixel and Android are bringing a new level of trust to your images with C2PA Content Credentials

Wednesday September 10th, 2025 03:59:36 PM
Posted by Eric Lynch, Senior Product Manager, Android Security, and Sherif Hanna, Group Product Manager, Google C2PA Core At Made by Google 2025, we announced that the new Google Pixel 10 phones will support C2PA Content Credentials in Pixel Camera and Google Photos. This announcement represents a series of steps towards greater digital media transparency: The Pixel 10 lineup is the first to have Content Credentials built in across every photo created by Pixel Camera. The Pixel Camera app achieved Assurance Level 2, the highest security rating currently defined by the C2PA Conformance Program. Assurance Level 2 for a mobile app is currently only possible on the Android platform. A private-by-design approach to C2PA certificate management, where no image or group of images can be related to one another or the person who created them. Pixel 10 phones support on-device trusted time-stamps, which ensures images captured with your native camera app can be trusted after the certificate expires, even if they were captured when your device was offline. These capabilities are powered by Google Tensor G5, Titan M2 security chip, the advanced hardware-backed security features of the Android platform, and Pixel engineering expertise. In this post, we’ll break down our architectural blueprint for bringing a new level of trust to digital media, and how developers can apply this model to their own apps on Android. A New Approach to Content Credentials Generative AI can help us all to be more creative, productive, and innovative. But it can be hard to tell the difference between content that’s been AI-generated, and content created without AI. The ability to verify the source and history—or provenance—of digital content is more important than ever. Content Credentials convey a rich set of information about how media such as images, videos, or audio files were made, protected by the same digital signature technology that has secured online transactions and mobile apps for decades. It empowers users to identify AI-generated (or altered) content, helping to foster transparency and trust in generative AI. It can be complemented by watermarking technologies such as SynthID. Content Credentials are an industry standard backed by a broad coalition of leading companies for securely conveying the origin and history of media files. The standard is developed by the Coalition for Content Provenance and Authenticity (C2PA), of which Google is a steering committee member. The traditional approach to classifying digital image content has focused on categorizing content as “AI” vs. “not AI”. This has been the basis for many legislative efforts, which have required the labeling of synthetic media. This traditional approach has drawbacks, as described in Chapter 5 of this seminal report by Google. Research shows that if only synthetic content is labeled as “AI”, then users falsely believe unlabeled content is “not AI”, a phenomenon called “the implied truth effect”. This is why Google is taking a different approach to applying C2PA Content Credentials. Instead of categorizing digital content into a simplistic “AI” vs. “not AI”, Pixel 10 takes the first steps toward implementing our vision of categorizing digital content as either i) media that comes with verifiable proof of how it was made or ii) media that doesn't. Pixel Camera attaches Content Credentials to any JPEG photo capture, with the appropriate description as defined by the Content Credentials specification for each capture mode. Google Photos attaches Content Credentials to JPEG images that already have Content Credentials and are edited using AI or non-AI tools, and also to any images that are edited using AI tools. It will validate and display Content Credentials under a new section in the About panel, if the JPEG image being viewed contains this data. Learn more about it in Google Photos Help. Given the broad range of scenarios in which Content Credentials are attached by these apps, we designed our C2PA implementation architecture from the onset to be: Secure from silicon to applications Verifiable, not personally identifiable Useable offline Secure from Silicon to Applications Good actors in the C2PA ecosystem are motivated to ensure that provenance data is trustworthy. C2PA Certification Authorities (CAs), such as Google, are incentivized to only issue certificates to genuine instances of apps from trusted developers in order to prevent bad actors from undermining the system. Similarly, app developers want to protect their C2PA claim signing keys from unauthorized use. And of course, users want assurance that the media files they rely on come from where they claim. For these reasons, the C2PA defined the Conformance Program. The Pixel Camera application on the Pixel 10 lineup has achieved Assurance Level 2, the highest security rating currently defined by the C2PA Conformance Program. This was made possible by a strong set of hardware-backed technologies, including Tensor G5 and the certified Titan M2 security chip, along with Android’s hardware-backed security APIs. Only mobile apps running on devices that have the necessary silicon features and Android APIs can be designed to achieve this assurance level. We are working with C2PA to help define future assurance levels that will push protections even deeper into hardware. Achieving Assurance Level 2 requires verifiable, difficult-to-forge evidence. Google has built an end-to-end system on Pixel 10 devices that verifies several key attributes. However, the security of any claim is fundamentally dependent on the integrity of the application and the OS, an integrity that relies on both being kept current with the latest security patches. Hardware Trust: Android Key Attestation in Pixel 10 is built on support for Device Identifier Composition Engine (DICE) by Tensor, and Remote Key Provisioning (RKP) to establish a trust chain from the moment the device starts up to the OS, stamping out the most common forms of abuse on Android. Genuine Device and Software: Aided by the hardware trust described above, Android Key Attestation allows Google C2PA Certification Authorities (CAs) to verify that they are communicating with a genuine physical device. It also allows them to verify the device has booted securely into a Play Protect Certified version of Android, and verify how recently the operating system, bootloader, and system software and firmware were patched for security vulnerabilities. Genuine Application: Hardware-backed Android Key Attestation certificates include the package name and signing certificates associated with the app that requested the generation of the C2PA signing key, allowing Google C2PA CAs to check that the app requesting C2PA claim signing certificates is a trusted, registered app. Tamper-Resistant Key Storage: On Pixel, C2PA claim signing keys are generated and stored using Android StrongBox in the Titan M2 security chip. Titan M2 is Common Criteria PP.0084 AVA_VAN.5 certified, meaning that it is strongly resistant to extracting or tampering with the cryptographic keys stored in it. Android Key Attestation allows Google C2PA CAs to verify that private keys were indeed created inside this hardware-protected vault before issuing certificates for their public key counterparts. The C2PA Conformance Program requires verifiable artifacts backed by a hardware Root of Trust, which Android provides through features like Key Attestation. This means Android developers can leverage these same tools to build apps that meet this standard for their users. Privacy Built on a Foundation of Trust: Verifiable, Not Personally Identifiable The robust security stack we described is the foundation of privacy. But Google takes steps further to ensure your privacy even as you use Content Credentials, which required solving two additional challenges: Challenge 1: Server-side Processing of Certificate Requests. Google’s C2PA Certification Authorities must certify new cryptographic keys generated on-device. To prevent fraud, these certificate enrollment requests need to be authenticated. A more common approach would require user accounts for authentication, but this would create a server-side record linking a user's identity to their C2PA certificates—a privacy trade-off we were unwilling to make. Our Solution: Anonymous, Hardware-Backed Attestation. We solve this with Android Key Attestation, which allows Google CAs to verify what is being used (a genuine app on a secure device) without ever knowing who is using it (the user). Our CAs also enforce a strict no-logging policy for information like IP addresses that could tie a certificate back to a user. Challenge 2: The Risk of Traceability Through Key Reuse. A significant privacy risk in any provenance system is traceability. If the same device or app-specific cryptographic key is used to sign multiple photos, those images can be linked by comparing the key. An adversary could potentially connect a photo someone posts publicly under their real name with a photo they post anonymously, deanonymizing the creator. Our Solution: Unique Certificates. We eliminate this threat with a maximally private approach. Each key and certificate is used to sign exactly one image. No two images ever share the same public key, a "One-and-Done" Certificate Management Strategy, making it cryptographically impossible to link them. This engineering investment in user privacy is designed to set a clear standard for the industry. Overall, you can use Content Credentials on Pixel 10 without fear that another person or Google could use it to link any of your images to you or one another. Ready to Use When You Are - Even Offline Implementations of Content Credentials use trusted time-stamps to ensure the credentials can be validated even after the certificate used to produce them expires. Obtaining these trusted time-stamps typically requires connectivity to a Time-Stamping Authority (TSA) server. But what happens if the device is offline? This is not a far-fetched scenario. Imagine you’ve captured a stunning photo of a remote waterfall. The image has Content Credentials that prove that it was captured by a camera, but the cryptographic certificate used to produce them will eventually expire. Without a time-stamp, that proof could become untrusted, and you're too far from a cell signal, which is required to receive one. To solve this, Pixel developed an on-device, offline TSA. Powered by the security features of Tensor, Pixel maintains a trusted clock in a secure environment, completely isolated from the user-controlled one in Android. The clock is synchronized regularly from a trusted source while the device is online, and is maintained even after the device goes offline (as long as the phone remains powered on). This allows your device to generate its own cryptographically-signed time-stamps the moment you press the shutter—no connection required. It ensures the story behind your photo remains verifiable and trusted after its certificate expires, whether you took it in your living room or at the top of a mountain. Building a More Trustworthy Ecosystem, Together C2PA Content Credentials are not the sole solution for identifying the provenance of digital media. They are, however, a tangible step toward more media transparency and trust as we continue to unlock more human creativity with AI. In our initial implementation of Content Credentials on the Android platform and Pixel 10 lineup, we prioritized a higher standard of privacy, security, and usability. We invite other implementers of Content Credentials to evaluate our approach and leverage these same foundational hardware and software security primitives. The full potential of these technologies can only be realized through widespread ecosystem adoption. We look forward to adding Content Credentials across more Google products in the near future.

Android’s pKVM Becomes First Globally Certified Software to Achieve Prestigious SESIP Level 5 Security Certification

Monday October 6th, 2025 01:12:36 AM
Posted by Dave Kleidermacher, VP Engineering, Android Security & Privacy Today marks a watershed moment and new benchmark for open-source security and the future of consumer electronics. Google is proud to announce that protected KVM (pKVM), the hypervisor that powers the Android Virtualization Framework, has officially achieved SESIP Level 5 certification. This makes pKVM the first software security system designed for large-scale deployment in consumer electronics to meet this assurance bar. Supporting Next-Gen Android Features The implications for the future of secure mobile technology are profound. With this level of security assurance, Android is now positioned to securely support the next generation of high-criticality isolated workloads. This includes vital features, such as on-device AI workloads that can operate on ultra-personalized data, with the highest assurances of privacy and integrity. This certification required a hands-on evaluation by Dekra, a globally recognized cybersecurity certification lab, which conducted an evaluation against the TrustCB SESIP scheme, compliant to EN-17927. Achieving Security Evaluation Standard for IoT Platforms (SESIP) Level 5 is a landmark because it incorporates AVA_VAN.5, the highest level of vulnerability analysis and penetration testing under the ISO 15408 (Common Criteria) standard. A system certified to this level has been evaluated to be resistant to highly skilled, knowledgeable, well-motivated, and well-funded attackers who may have insider knowledge and access. This certification is the cornerstone of the next-generation of Android’s multi-layered security strategy. Many of the TEEs (Trusted Execution Environments) used in the industry have not been formally certified or have only achieved lower levels of security assurance. This inconsistency creates a challenge for developers looking to build highly critical applications that require a robust and verifiable level of security. The certified pKVM changes this paradigm entirely. It provides a single, open-source, and exceptionally high-quality firmware base that all device manufacturers can build upon. Looking ahead, Android device manufacturers will be required to use isolation technology that meets this same level of security for various security operations that the device relies on. Protected KVM ensures that every user can benefit from a consistent, transparent, and verifiably secure foundation. A Collaborative Effort This achievement represents just one important aspect of the immense, multi-year dedication from the Linux and KVM developer communities and multiple engineering teams at Google developing pKVM and AVF. We look forward to seeing the open-source community and Android ecosystem continue to build on this foundation, delivering a new era of high-assurance mobile technology for users.

Introducing OSS Rebuild: Open Source, Rebuilt to Last

Monday August 4th, 2025 09:40:56 PM
Posted by Matthew Suozzo, Google Open Source Security Team (GOSST)Today we're excited to announce OSS Rebuild, a new project to strengthen trust in open source package ecosystems by reproducing upstream artifacts. As supply chain attacks continue to target widely-used dependencies, OSS Rebuild gives security teams powerful data to avoid compromise without burden on upstream maintainers.The project comprises:Automation to derive declarative build definitions for existing PyPI (Python), npm (JS/TS), and Crates.io (Rust) packages.SLSA Provenance for thousands of packages across our supported ecosystems, meeting SLSA Build Level 3 requirements with no publisher intervention.Build observability and verification tools that security teams can integrate into their existing vulnerability management workflows.Infrastructure definitions to allow organizations to easily run their own instances of OSS Rebuild to rebuild, generate, sign, and distribute provenance.ChallengesOpen source software has become the foundation of our digital world. From critical infrastructure to everyday applications, OSS components now account for 77% of modern applications. With an estimated value exceeding $12 trillion, open source software has never been more integral to the global economy.Yet this very ubiquity makes open source an attractive target: Recent high-profile supply chain attacks have demonstrated sophisticated methods for compromising widely-used packages. Each incident erodes trust in open ecosystems, creating hesitation among both contributors and consumers.The security community has responded with initiatives like OpenSSF Scorecard, pypi's Trusted Publishers, and npm's native SLSA support. However, there is no panacea: Each effort targets a certain aspect of the problem, often making tradeoffs like shifting work onto publishers and maintainers.Our AimOur aim with OSS Rebuild is to empower the security community to deeply understand and control their supply chains by making package consumption as transparent as using a source repository. Our rebuild platform unlocks this transparency by utilizing a declarative build process, build instrumentation, and network monitoring capabilities which, within the SLSA Build framework, produces fine-grained, durable, trustworthy security metadata.Building on the hosted infrastructure model that we pioneered with OSS Fuzz for memory issue detection, OSS Rebuild similarly seeks to use hosted resources to address security challenges in open source, this time aimed at securing the software supply chain.Our vision extends beyond any single ecosystem: We are committed to bringing supply chain transparency and security to all open source software development. Our initial support for the PyPI (Python), npm (JS/TS), and Crates.io (Rust) package registries—providing rebuild provenance for many of their most popular packages—is just the beginning of our journey.How OSS Rebuild WorksThrough automation and heuristics, we determine a prospective build definition for a target package and rebuild it. We semantically compare the result with the existing upstream artifact, normalizing each one to remove instabilities that cause bit-for-bit comparisons to fail (e.g. archive compression). Once we reproduce the package, we publish the build definition and outcome via SLSA Provenance. This attestation allows consumers to reliably verify a package's origin within the source history, understand and repeat its build process, and customize the build from a known-functional baseline (or maybe even use it to generate more detailed SBOMs).With OSS Rebuild's existing automation for PyPI, npm, and Crates.io, most packages obtain protection effortlessly without user or maintainer intervention. Where automation isn't currently able to fully reproduce the package, we offer manual build specification so the whole community benefits from individual contributions.And we are also excited at the potential for AI to help reproduce packages: Build and release processes are often described in natural language documentation which, while difficult to utilize with discrete logic, is increasingly useful to language models. Our initial experiments have demonstrated the approach's viability in automating exploration and testing, with limited human intervention, even in the most complex builds.Our CapabilitiesOSS Rebuild helps detect several classes of supply chain compromise:Unsubmitted Source Code - When published packages contain code not present in the public source repository, OSS Rebuild will not attest to the artifact.Real world attack: solana/webjs (2024)Build Environment Compromise - By creating standardized, minimal build environments with comprehensive monitoring, OSS Rebuild can detect suspicious build activity or avoid exposure to compromised components altogether.Real world attack: tj-actions/changed-files (2025)Stealthy Backdoors - Even sophisticated backdoors like xz often exhibit anomalous behavioral patterns during builds. OSS Rebuild's dynamic analysis capabilities can detect unusual execution paths or suspicious operations that are otherwise impractical to identify through manual review.Real world attack: xz-utils (2024)For enterprises and security professionals, OSS Rebuild can...Enhance metadata without changing registries by enriching data for upstream packages. No need to maintain custom registries or migrate to a new package ecosystem.Augment SBOMs by adding detailed build observability information to existing Software Bills of Materials, creating a more complete security picture.Accelerate vulnerability response by providing a path to vendor, patch, and re-host upstream packages using our verifiable build definitions.For publishers and maintainers of open source packages, OSS Rebuild can...Strengthen package trust by providing consumers with independent verification of the packages' build integrity, regardless of the sophistication of the original build.Retrofit historical packages' integrity with high-quality build attestations, regardless of whether build attestations were present or supported at the time of publication.Reduce CI security-sensitivity allowing publishers to focus on core development work. CI platforms tend to have complex authorization and execution models and by performing separate rebuilds, the CI environment no longer needs to be load-bearing for your packages' security.Check it out!The easiest (but not only!) way to access OSS Rebuild attestations is to use the provided Go-based command-line interface. It can be compiled and installed easily:$ go install github.com/google/oss-rebuild/cmd/oss-rebuild@latestYou can fetch OSS Rebuild's SLSA Provenance:$ oss-rebuild get cratesio syn 2.0.39..or explore the rebuilt versions of a particular package:$ oss-rebuild list pypi absl-py..or even rebuild the package for yourself:$ oss-rebuild get npm lodash 4.17.20 --output=dockerfile | \   docker run $(docker buildx build -q -)Join Us in Helping Secure Open SourceOSS Rebuild is not just about fixing problems; it's about empowering end-users to make open source ecosystems more secure and transparent through collective action. If you're a developer, enterprise, or security researcher interested in OSS security, we invite you to follow along and get involved!Check out the code, share your ideas, and voice your feedback at github.com/google/oss-rebuild.Explore the data and contribute to improving support for your critical ecosystems and packages.Learn more about SLSA Provenance at slsa.dev

Advancing Protection in Chrome on Android

Tuesday July 8th, 2025 05:36:01 PM
Posted by David Adrian, Javier Castro & Peter Kotwicz, Chrome Security Team Android recently announced Advanced Protection, which extends Google’s Advanced Protection Program to a device-level security setting for Android users that need heightened security—such as journalists, elected officials, and public figures. Advanced Protection gives you the ability to activate Google’s strongest security for mobile devices, providing greater peace of mind that you’re better protected against the most sophisticated threats. Advanced Protection acts as a single control point for at-risk users on Android that enables important security settings across applications, including many of your favorite Google apps, including Chrome. In this post, we’d like to do a deep dive into the Chrome features that are integrated with Advanced Protection, and how enterprises and users outside of Advanced Protection can leverage them. Android Advanced Protection integrates with Chrome on Android in three main ways: Enables the “Always Use Secure Connections” setting for both public and private sites, so that users are protected from attackers reading confidential data or injecting malicious content into insecure plaintext HTTP connections. Insecure HTTP represents less than 1% of page loads for Chrome on Android. Enables full Site Isolation on mobile devices with 4GB+ RAM, so that potentially malicious sites are never loaded in the same process as legitimate websites. Desktop Chrome clients already have full Site Isolation. Reduces attack surface by disabling Javascript optimizations, so that Chrome has a smaller attack surface and is harder to exploit. Let’s take a look at all three, learn what they do, and how they can be controlled outside of Advanced Protection. Always Use Secure Connections “Always Use Secure Connections” (also known as HTTPS-First Mode in blog posts and HTTPS-Only Mode in the enterprise policy) is a Chrome setting that forces HTTPS wherever possible, and asks for explicit permission from you before connecting to a site insecurely. There may be attackers attempting to interpose on connections on any network, whether that network is a coffee shop, airport, or an Internet backbone. This setting protects users from these attackers reading confidential data and injecting malicious content into otherwise innocuous webpages. This is particularly useful for Advanced Protection users, since in 2023, plaintext HTTP was used as an exploitation vector during the Egyptian election. Beyond Advanced Protection, we previously posted about how our goal is to eventually enable “Always Use Secure Connections” by default for all Chrome users. As we work towards this goal, in the last two years we have quietly been enabling it in more places beyond Advanced Protection, to help protect more users in risky situations, while limiting the number of warnings users might click through: We added a new variant of the setting that only warns on public sites, and doesn’t warn on local networks or single-label hostnames (e.g. 192.168.0.1, shortlink/, 10.0.0.1). These names often cannot be issued a publicly-trusted HTTPS certificate. This variant protects against most threats—accessing a public website insecurely—but still allows for users to access local sites, which may be on a more trusted network, without seeing a warning. We’ve automatically enabled “Always Use Secure Connections” for public sites in Incognito Mode for the last year, since Chrome 127 in June 2024. We automatically prevent downgrades from HTTPS to plaintext HTTP on sites that Chrome knows you typically access over HTTPS (a heuristic version of the HSTS header), since Chrome 133 in January 2025.Always Use Secure Connections has two modes—warn on insecure public sites, and warn on any insecure site. Any user can enable “Always Use Secure Connections” in the Chrome Privacy and Security settings, regardless of if they’re using Advanced Protection. Users can choose if they would like to warn on any insecure site, or only insecure public sites. Enterprises can opt their fleet into either mode, and set exceptions using the HTTPSOnlyMode and HTTPAllowlist policies, respectively. Website operators should protect their users' confidentiality, ensure their content is delivered exactly as they intended, and avoid warnings, by deploying HTTPS.Full Site IsolationSite Isolation is a security feature in Chrome that isolates each website into its own rendering OS process. This means that different websites, even if loaded in a single tab of the same browser window, are kept completely separate from each other in memory. This isolation prevents a malicious website from accessing data or code from another website, even if that malicious website manages to exploit a vulnerability in Chrome’s renderer—a second bug to escape the renderer sandbox is required to access other sites. Site isolation improves security, but requires extra memory to have one process per site. Chrome Desktop isolates all sites by default. However, Android is particularly sensitive to memory usage, so for mobile Android form factors, when Advanced Protection is off, Chrome will only isolate a site if a user logs into that site, or if the user submits a form on that site. On Android devices with 4GB+ RAM in Advanced Protection (and on all desktop clients), Chrome will isolate all sites. Full Site Isolation significantly reduces the risk of cross-site data leakage for Advanced Protection users. JavaScript Optimizations and Security Advanced Protection reduces the attack surface of Chrome by disabling the higher-level optimizing Javascript compilers inside V8. V8 is Chrome’s high-performance Javascript and WebAssembly engine. The optimizing compilers in V8 make certain websites run faster, however they historically also have been a source of known exploitation of Chrome. Of all the patched security bugs in V8 with known exploitation, disabling the optimizers would have mitigated ~50%. However, the optimizers are why Chrome scores the highest on industry-wide benchmarks such as Speedometer. Disabling the optimizers blocks a large class of exploits, at the cost of causing performance issues for some websites. Javascript optimizers can be disabled outside of Advanced Protection Mode via the “Javascript optimization & security” Site Setting. The Site Setting also enables users to disable/enable Javascript optimizers on a per-site basis. Disabling these optimizing compilers is not limited to Advanced Protection. Since Chrome 133, we’ve exposed this as a Site Setting that allows users to enable or disable the higher-level optimizing compilers on a per-site basis, as well as change the default. Settings -> Privacy and Security -> Javascript optimization and security This setting can be controlled by the DefaultJavaScriptOptimizerSetting enterprise policy, alongside JavaScriptOptimizerAllowedForSites and JavaScriptOptimizerBlockedForSites for managing the allowlist and denylist. Enterprises can use this policy to block access to the optimizer, while still allowlisting1 the SaaS vendors their employees use on a daily basis. It’s available on Android and desktop platforms Chrome aims for the default configuration to be secure for all its users, and we’re continuing to raise the bar for V8 security in the default configuration by rolling out the V8 sandbox. Protecting All Users Billions of people use Chrome and Android, and not all of them have the same risk profile. Less sophisticated attacks by commodity malware can be very lucrative for attackers when done at scale, but so can sophisticated attacks on targeted users. This means that we cannot expect the security tradeoffs we make for the default configuration of Chrome to be suitable for everyone. Advanced Protection, and the security settings associated with it, are a way for users with varying risk profiles to tailor Chrome to their security needs, either as an individual at-risk user. Enterprises with a fleet of managed Chrome installations can also enable the underlying settings now. Advanced Protection is available on Android 16 in Chrome 137+. We additionally recommend at-risk users join the Advanced Protection Program with their Google accounts, which will require the account to use phishing-resistant multi-factor authentication methods and enable Advanced Protection on any of the user’s Android devices. We also recommend users enable automatic updates and always keep their Android phones and web browsers up to date. Notes Allowlisting only works on platforms capable of full site isolation—any desktop platform and Android devices with 2GB+ RAM. This is because internally allowlisting is dependent on origin isolation. ↩

Mitigating prompt injection attacks with a layered defense strategy

Friday June 13th, 2025 06:49:19 PM
Posted by Google GenAI Security TeamWith the rapid adoption of generative AI, a new wave of threats is emerging across the industry with the aim of manipulating the AI systems themselves. One such emerging attack vector is indirect prompt injections. Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions. As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures.At Google, our teams have a longstanding precedent of investing in a defense-in-depth strategy, including robust evaluation, threat analysis, AI security best practices, AI red-teaming, adversarial training, and model hardening for generative AI tools. This approach enables safer adoption of Gemini in Google Workspace and the Gemini app (we refer to both in this blog as “Gemini” for simplicity). Below we describe our prompt injection mitigation product strategy based on extensive research, development, and deployment of improved security mitigations.A layered security approachGoogle has taken a layered security approach introducing security measures designed for each stage of the prompt lifecycle. From Gemini 2.5 model hardening, to purpose-built machine learning (ML) models detecting malicious instructions, to system-level safeguards, we are meaningfully elevating the difficulty, expense, and complexity faced by an attacker. This approach compels adversaries to resort to methods that are either more easily identified or demand greater resources. Our model training with adversarial data significantly enhanced our defenses against indirect prompt injection attacks in Gemini 2.5 models (technical details). This inherent model resilience is augmented with additional defenses that we built directly into Gemini, including: Prompt injection content classifiersSecurity thought reinforcementMarkdown sanitization and suspicious URL redactionUser confirmation frameworkEnd-user security mitigation notificationsThis layered approach to our security strategy strengthens the overall security framework for Gemini – throughout the prompt lifecycle and across diverse attack techniques.1. Prompt injection content classifiersThrough collaboration with leading AI security researchers via Google's AI Vulnerability Reward Program (VRP), we've curated one of the world’s most advanced catalogs of generative AI vulnerabilities and adversarial data. Utilizing this resource, we built and are in the process of rolling out proprietary machine learning models that can detect malicious prompts and instructions within various formats, such as emails and files, drawing from real-world examples. Consequently, when users query Workspace data with Gemini, the content classifiers filter out harmful data containing malicious instructions, helping to ensure a secure end-to-end user experience by retaining only safe content. For example, if a user receives an email in Gmail that includes malicious instructions, our content classifiers help to detect and disregard malicious instructions, then generate a safe response for the user. This is in addition to built-in defenses in Gmail that automatically block more than 99.9% of spam, phishing attempts, and malware.A diagram of Gemini’s actions based on the detection of the malicious instructions by content classifiers.2. Security thought reinforcementThis technique adds targeted security instructions surrounding the prompt content to remind the large language model (LLM) to perform the user-directed task and ignore any adversarial instructions that could be present in the content. With this approach, we steer the LLM to stay focused on the task and ignore harmful or malicious requests added by a threat actor to execute indirect prompt injection attacks.A diagram of Gemini’s actions based on additional protection provided by the security thought reinforcement technique. 3. Markdown sanitization and suspicious URL redaction Our markdown sanitizer identifies external image URLs and will not render them, making the “EchoLeak” 0-click image rendering exfiltration vulnerability not applicable to Gemini. From there, a key protection against prompt injection and data exfiltration attacks occurs at the URL level. With external data containing dynamic URLs, users may encounter unknown risks as these URLs may be designed for indirect prompt injections and data exfiltration attacks. Malicious instructions executed on a user's behalf may also generate harmful URLs. With Gemini, our defense system includes suspicious URL detection based on Google Safe Browsing to differentiate between safe and unsafe links, providing a secure experience by helping to prevent URL-based attacks. For example, if a document contains malicious URLs and a user is summarizing the content with Gemini, the suspicious URLs will be redacted in Gemini’s response. Gemini in Gmail provides a summary of an email thread. In the summary, there is an unsafe URL. That URL is redacted in the response and is replaced with the text “suspicious link removed”. 4. User confirmation frameworkGemini also features a contextual user confirmation system. This framework enables Gemini to require user confirmation for certain actions, also known as “Human-In-The-Loop” (HITL), using these responses to bolster security and streamline the user experience. For example, potentially risky operations like deleting a calendar event may trigger an explicit user confirmation request, thereby helping to prevent undetected or immediate execution of the operation.The Gemini app with instructions to delete all events on Saturday. Gemini responds with the events found on Google Calendar and asks the user to confirm this action.5. End-user security mitigation notificationsA key aspect to keeping our users safe is sharing details on attacks that we’ve stopped so users can watch out for similar attacks in the future. To that end, when security issues are mitigated with our built-in defenses, end users are provided with contextual information allowing them to learn more via dedicated help center articles. For example, if Gemini summarizes a file containing malicious instructions and one of Google’s prompt injection defenses mitigates the situation, a security notification with a “Learn more” link will be displayed for the user. Users are encouraged to become more familiar with our prompt injection defenses by reading the Help Center article. Gemini in Docs with instructions to provide a summary of a file. Suspicious content was detected and a response was not provided. There is a yellow security notification banner for the user and a statement that Gemini’s response has been removed, with a “Learn more” link to a relevant Help Center article.Moving forwardOur comprehensive prompt injection security strategy strengthens the overall security framework for Gemini. Beyond the techniques described above, it also involves rigorous testing through manual and automated red teams, generative AI security BugSWAT events, strong security standards like our Secure AI Framework (SAIF), and partnerships with both external researchers via the Google AI Vulnerability Reward Program (VRP) and industry peers via the Coalition for Secure AI (CoSAI). Our commitment to trust includes collaboration with the security community to responsibly disclose AI security vulnerabilities, share our latest threat intelligence on ways we see bad actors trying to leverage AI, and offering insights into our work to build stronger prompt injection defenses. Working closely with industry partners is crucial to building stronger protections for all of our users. To that end, we’re fortunate to have strong collaborative partnerships with numerous researchers, such as Ben Nassi (Confidentiality), Stav Cohen (Technion), and Or Yair (SafeBreach), as well as other AI Security researchers participating in our BugSWAT events and AI VRP program. We appreciate the work of these researchers and others in the community to help us red team and refine our defenses.We continue working to make upcoming Gemini models inherently more resilient and add additional prompt injection defenses directly into Gemini later this year. To learn more about Google’s progress and research on generative AI threat actors, attack techniques, and vulnerabilities, take a look at the following resources:Beyond Speculation: Data-Driven Insights into AI and Cybersecurity (RSAC 2025 conference keynote) from Google’s Threat Intelligence Group (GTIG)Adversarial Misuse of Generative AI (blog post) from Google’s Threat Intelligence Group (GTIG)Google's Approach for Secure AI Agents (white paper) from Google’s Secure AI Framework (SAIF) teamAdvancing Gemini's security safeguards (blog post) from Google’s DeepMind teamLessons from Defending Gemini Against Indirect Prompt Injections (white paper) from Google’s DeepMind team

Sustaining Digital Certificate Security - Upcoming Changes to the Chrome Root Store

Friday May 30th, 2025 03:02:16 PM
Posted by Chrome Root Program, Chrome Security Team Note: Google Chrome communicated its removal of default trust of Chunghwa Telecom and Netlock in the public forum on May 30, 2025. The Chrome Root Program Policy states that Certification Authority (CA) certificates included in the Chrome Root Store must provide value to Chrome end users that exceeds the risk of their continued inclusion. It also describes many of the factors we consider significant when CA Owners disclose and respond to incidents. When things don’t go right, we expect CA Owners to commit to meaningful and demonstrable change resulting in evidenced continuous improvement. Chrome's confidence in the reliability of Chunghwa Telecom and Netlock as CA Owners included in the Chrome Root Store has diminished due to patterns of concerning behavior observed over the past year. These patterns represent a loss of integrity and fall short of expectations, eroding trust in these CA Owners as publicly-trusted certificate issuers trusted by default in Chrome. To safeguard Chrome’s users, and preserve the integrity of the Chrome Root Store, we are taking the following action. Upcoming change in Chrome 139 and higher: Transport Layer Security (TLS) server authentication certificates validating to the following root CA certificates whose earliest Signed Certificate Timestamp (SCT) is dated after July 31, 2025 11:59:59 PM UTC, will no longer be trusted by default. OU=ePKI Root Certification Authority,O=Chunghwa Telecom Co., Ltd.,C=TW CN=HiPKI Root CA - G1,O=Chunghwa Telecom Co., Ltd.,C=TW CN=NetLock Arany (Class Gold) Főtanúsítvány,OU=Tanúsítványkiadók (Certification Services),O=NetLock Kft.,L=Budapest,C=HU TLS server authentication certificates validating to the above set of roots whose earliest SCT is on or before July 31, 2025 11:59:59 PM UTC, will be unaffected by this change. This approach attempts to minimize disruption to existing subscribers using a previously announced Chrome feature to remove default trust based on the SCTs in certificates. Additionally, should a Chrome user or enterprise explicitly trust any of the above certificates on a platform and version of Chrome relying on the Chrome Root Store (e.g., explicit trust is conveyed through a Group Policy Object on Windows), the SCT-based constraints described above will be overridden and certificates will function as they do today. To further minimize risk of disruption, website operators are encouraged to review the “Frequently Asked Questions" listed below. Why is Chrome taking action? CAs serve a privileged and trusted role on the internet that underpin encrypted connections between browsers and websites. With this tremendous responsibility comes an expectation of adhering to reasonable and consensus-driven security and compliance expectations, including those defined by the CA/Browser Forum TLS Baseline Requirements. Over the past several months and years, we have observed a pattern of compliance failures, unmet improvement commitments, and the absence of tangible, measurable progress in response to publicly disclosed incident reports. When these factors are considered in aggregate and considered against the inherent risk each publicly-trusted CA poses to the internet, continued public trust is no longer justified. When will this action happen? The action of Chrome, by default, no longer trusting new TLS certificates issued by these CAs will begin on approximately August 1, 2025, affecting certificates issued at that point or later. This action will occur in Versions of Chrome 139 and greater on Windows, macOS, ChromeOS, Android, and Linux. Apple policies prevent the Chrome Certificate Verifier and corresponding Chrome Root Store from being used on Chrome for iOS. What is the user impact of this action? By default, Chrome users in the above populations who navigate to a website serving a certificate from Chunghwa Telecom or Netlock issued after July 31, 2025 will see a full page interstitial similar to this one. Certificates issued by other CAs are not impacted by this action. How can a website operator tell if their website is affected? Website operators can determine if they are affected by this action by using the Chrome Certificate Viewer. Use the Chrome Certificate Viewer Navigate to a website (e.g., https://www.google.com) Click the “Tune" icon Click “Connection is Secure" Click “Certificate is Valid" (the Chrome Certificate Viewer will open) Website owner action is not required, if the “Organization (O)” field listed beneath the “Issued By" heading does not contain “Chunghwa Telecom" , “行政院” , “NETLOCK Ltd.”, or “NETLOCK Kft.” Website owner action is required, if the “Organization (O)” field listed beneath the “Issued By" heading contains “Chunghwa Telecom" , “行政院” , “NETLOCK Ltd.”, or “NETLOCK Kft.” What does an affected website operator do? We recommend that affected website operators transition to a new publicly-trusted CA Owner as soon as reasonably possible. To avoid adverse website user impact, action must be completed before the existing certificate(s) expire if expiry is planned to take place after July 31, 2025. While website operators could delay the impact of blocking action by choosing to collect and install a new TLS certificate issued from Chunghwa Telecom or Netlock before Chrome’s blocking action begins on August 1, 2025, website operators will inevitably need to collect and install a new TLS certificate from one of the many other CAs included in the Chrome Root Store. Can I test these changes before they take effect? Yes. A command-line flag was added beginning in Chrome 128 that allows administrators and power users to simulate the effect of an SCTNotAfter distrust constraint as described in this blog post. How to: Simulate an SCTNotAfter distrust1. Close all open versions of Chrome2. Start Chrome using the following command-line flag, substituting variables described below with actual values --test-crs-constraints=$[Comma Separated List of Trust Anchor Certificate SHA256 Hashes]:sctnotafter=$[epoch_timestamp] 3. Evaluate the effects of the flag with test websites Learn more about command-line flags here. I use affected certificates for my internal enterprise network, do I need to do anything? Beginning in Chrome 127, enterprises can override Chrome Root Store constraints like those described in this blog post by installing the corresponding root CA certificate as a locally-trusted root on the platform Chrome is running (e.g., installed in the Microsoft Certificate Store as a Trusted Root CA). How do enterprises add a CA as locally-trusted? Customer organizations should use this enterprise policy or defer to platform provider guidance for trusting root CA certificates. What about other Google products? Other Google product team updates may be made available in the future.

Tracking the Cost of Quantum Factoring

Friday May 23rd, 2025 11:57:56 AM
Posted by Craig Gidney, Quantum Research Scientist, and Sophie Schmieg, Senior Staff Cryptography Engineer Google Quantum AI's mission is to build best in class quantum computing for otherwise unsolvable problems. For decades the quantum and security communities have also known that large-scale quantum computers will at some point in the future likely be able to break many of today’s secure public key cryptography algorithms, such as Rivest–Shamir–Adleman (RSA). Google has long worked with the U.S. National Institute of Standards and Technology (NIST) and others in government, industry, and academia to develop and transition to post-quantum cryptography (PQC), which is expected to be resistant to quantum computing attacks. As quantum computing technology continues to advance, ongoing multi-stakeholder collaboration and action on PQC is critical.In order to plan for the transition from today’s cryptosystems to an era of PQC, it's important the size and performance of a future quantum computer that could likely break current cryptography algorithms is carefully characterized. Yesterday, we published a preprint demonstrating that 2048-bit RSA encryption could theoretically be broken by a quantum computer with 1 million noisy qubits running for one week. This is a 20-fold decrease in the number of qubits from our previous estimate, published in 2019. Notably, quantum computers with relevant error rates currently have on the order of only 100 to 1000 qubits, and the National Institute of Standards and Technology (NIST) recently released standard PQC algorithms that are expected to be resistant to future large-scale quantum computers. However, this new result does underscore the importance of migrating to these standards in line with NIST recommended timelines. Estimated resources for factoring have been steadily decreasingQuantum computers break RSA by factoring numbers, using Shor’s algorithm. Since Peter Shor published this algorithm in 1994, the estimated number of qubits needed to run it has steadily decreased. For example, in 2012, it was estimated that a 2048-bit RSA key could be broken by a quantum computer with a billion physical qubits. In 2019, using the same physical assumptions – which consider qubits with a slightly lower error rate than Google Quantum AI’s current quantum computers – the estimate was lowered to 20 million physical qubits.Historical estimates of the number of physical qubits needed to factor 2048-bit RSA integers.This result represents a 20-fold decrease compared to our estimate from 2019The reduction in physical qubit count comes from two sources: better algorithms and better error correction – whereby qubits used by the algorithm ("logical qubits") are redundantly encoded across many physical qubits, so that errors can be detected and corrected.On the algorithmic side, the key change is to compute an approximate modular exponentiation rather than an exact one. An algorithm for doing this, while using only small work registers, was discovered in 2024 by Chevignard and Fouque and Schrottenloher. Their algorithm used 1000x more operations than prior work, but we found ways to reduce that overhead down to 2x.On the error correction side, the key change is tripling the storage density of idle logical qubits by adding a second layer of error correction. Normally more error correction layers means more overhead, but a good combination was discovered by the Google Quantum AI team in 2023. Another notable error correction improvement is using "magic state cultivation", proposed by the Google Quantum AI team in 2024, to reduce the workspace required for certain basic quantum operations. These error correction improvements aren't specific to factoring and also reduce the required resources for other quantum computations like in chemistry and materials simulation.Security implicationsNIST recently concluded a PQC competition that resulted in the first set of PQC standards. These algorithms can already be deployed to defend against quantum computers well before a working cryptographically relevant quantum computer is built. To assess the security implications of quantum computers, however, it’s instructive to additionally take a closer look at the affected algorithms (see here for a detailed look): RSA and Elliptic Curve Diffie-Hellman. As asymmetric algorithms, they are used for encryption in transit, including encryption for messaging services, as well as digital signatures (widely used to prove the authenticity of documents or software, e.g. the identity of websites). For asymmetric encryption, in particular encryption in transit, the motivation to migrate to PQC is made more urgent due to the fact that an adversary can collect ciphertexts, and later decrypt them once a quantum computer is available, known as a “store now, decrypt later” attack. Google has therefore been encrypting traffic both in Chrome and internally, switching to the standardized version of ML-KEM once it became available. Notably not affected is symmetric cryptography, which is primarily deployed in encryption at rest, and to enable some stateless services.For signatures, things are more complex. Some signature use cases are similarly urgent, e.g., when public keys are fixed in hardware. In general, the landscape for signatures is mostly remarkable due to the higher complexity of the transition, since signature keys are used in many different places, and since these keys tend to be longer lived than the usually ephemeral encryption keys. Signature keys are therefore harder to replace and much more attractive targets to attack, especially when compute time on a quantum computer is a limited resource. This complexity likewise motivates moving earlier rather than later. To enable this, we have added PQC signature schemes in public preview in Cloud KMS. The initial public draft of the NIST internal report on the transition to post-quantum cryptography standards states that vulnerable systems should be deprecated after 2030 and disallowed after 2035. Our work highlights the importance of adhering to this recommended timeline.More from Google on PQC: https://cloud.google.com/security/resources/post-quantum-cryptography?e=48754805 

What’s New in Android Security and Privacy in 2025

Wednesday May 14th, 2025 04:28:23 PM
Posted by Dave Kleidermacher, VP Engineering, Android Security and Privacy Android’s intelligent protections keep you safe from everyday dangers. Our dedication to your security is validated by security experts, who consistently rank top Android devices highest in security, and score Android smartphones, led by the Pixel 9 Pro, as leaders in anti-fraud efficacy.Android is always developing new protections to keep you, your device, and your data safe. Today, we’re announcing new features and enhancements that build on our industry-leading protections to help keep you safe from scams, fraud, and theft on Android. Smarter protections against phone call scams Our research shows that phone scammers often try to trick people into performing specific actions to initiate a scam, like changing default device security settings or granting elevated permissions to an app. These actions can result in spying, fraud, and other abuse by giving an attacker deeper access to your device and data. To combat phone scammers, we’re working to block specific actions and warn you of these sophisticated attempts. This happens completely on device and is applied only with conversations with non-contacts. Android’s new in-call protections1 provide an additional layer of defense, preventing you from taking risky security actions during a call like: Disabling Google Play Protect, Android’s built-in security protection, that is on by default and continuously scans for malicious app behavior, no matter the download source. Sideloading an app for the first time from a web browser, messaging app or other source – which may not have been vetted for security and privacy by Google. Granting accessibility permissions, which can give a newly downloaded malicious app access to gain control over the user's device and steal sensitive/private data, like banking information. And if you’re screen sharing during a phone call, Android will now automatically prompt you to stop sharing at the end of a call. These protections help safeguard you against scammers that attempt to gain access to sensitive information to conduct fraud. Piloting enhanced in-call protection for banking appsScreen sharing scams are becoming quite common, with fraudsters often impersonating banks, government agencies, and other trusted institutions – using screen sharing to guide users to perform costly actions such as mobile banking transfers. To better protect you from these attacks, we’re piloting new in-call protections for banking apps, starting in the UK. When you launch a participating banking app while screen sharing with an unknown contact, your Android device will warn you about the potential dangers and give you the option to end the call and to stop screen sharing with one tap. This feature will be enabled automatically for participating banking apps whenever you're on a phone call with an unknown contact on Android 11+ devices. We are working with UK banks Monzo, NatWest and Revolut to pilot this feature for their customers in the coming weeks and will assess the results of the pilot ahead of a wider roll out. Making real-time Scam Detection in Google Messages even more intelligent We recently launched AI-powered Scam Detection in Google Messages and Phone by Google to protect you from conversational scams that might sound innocent at first, but turn malicious and can lead to financial loss or data theft. When Scam Detection discovers a suspicious conversation pattern, it warns you in real-time so you can react before falling victim to a costly scam. AI-powered Scam Detection is always improving to help keep you safe while also keeping your privacy in mind. With Google’s advanced on-device AI, your conversations stay private to you. All message processing remains on-device and you’re always in control. You can turn off Spam Protection, which includes Scam Detection, in your Google Messages at any time. Prior to targeting conversational scams, Scam Detection in Google Messages focused on analyzing and detecting package delivery and job seeking scams. We’ve now expanded our detections to help protect you from a wider variety of sophisticated scams including: Toll road and other billing fee scams Crypto scams Financial impersonation scams Gift card and prize scams Technical support scams And more These enhancements apply to all Google Messages users. Fighting fraud and impersonation with Key Verifier To help protect you from scammers who try to impersonate someone you know, we’re launching a helpful tool called Key Verifier. The feature allows you and the person you’re messaging to verify the identity of the other party through public encryption keys, protecting your end-to-end encrypted messages in Google Messages. By verifying contact keys in your Google Contacts app (through a QR code scanning or number comparison), you can have an extra layer of assurance that the person on the other end is genuine and that your conversation is private with them. Key Verifier provides a visual way for you and your contact to quickly confirm that your secret keys match, strengthening your confidence that you’re communicating with the intended recipient and not a scammer. For example, if an attacker gains access to a friend’s phone number and uses it on another device to send you a message – which can happen as a result of a SIM swap attack – their contact's verification status will be marked as no longer verified in the Google Contacts app, suggesting your friend’s account may be compromised or has been changed. Key Verifier will launch later this summer in Google Messages on Android 10+ devices. Comprehensive mobile theft protection, now even stronger Physical device theft can lead to financial fraud and data theft, with the value of your banking and payment information many times exceeding the value of your phone. This is one of the reasons why last year we launched the mobile industry’s most comprehensive suite of theft protection features to protect you before, during, and after a theft. Since launch, our theft protection features have helped protect data on hundreds of thousands of devices that may have fallen into the wrong hands. This includes devices that were locked by Remote Lock or Theft Detection Lock and remained locked for over 48 hours. Most recently, we launched Identity Check for Pixel and Samsung One UI 7 devices, providing an extra layer of security even if your PIN or password is compromised. This protection will also now be available from more device manufacturers on supported devices that upgrade to Android 16. Coming later this year, we’re further hardening Factory Reset protections, which will restrict all functionalities on devices that are reset without the owner’s authorization. You'll also gain more control over our Remote Lock feature with the addition of a security challenge question, helping to prevent unauthorized actions. We’re also enhancing your security against thieves in Android 16 by providing more protection for one-time passwords that are received when your phone is locked. In higher risk scenarios2, Android will hide one-time passwords on your lock screen, ensuring that only you can see them after unlocking your device. Advanced Protection: Google’s strongest security for mobile devices Protecting users who need heightened security has been a long-standing commitment at Google, which is why we have our Advanced Protection Program that provides Google’s strongest protections against targeted attacks.To enhance these existing device defenses, Android 16 extends Advanced Protection with a device-level security setting for Android users. Whether you’re an at-risk individual – such as a journalist, elected official, or public figure – or you just prioritize security, Advanced Protection gives you the ability to activate Google’s strongest security for mobile devices, providing greater peace of mind that you’re protected against the most sophisticated threats. Advanced Protection is available on devices with Android 16. Learn more in our blog. More intelligent defenses against bad apps with Google Play Protect One way malicious developers try to trick people is by hiding or changing their app icon, making unsafe apps more difficult to find and remove. Now, Google Play Protect live threat detection will catch apps and alert you when we detect this deceptive behavior. This feature will be available to Google Pixel 6+ and a selection of new devices from other manufacturers in the coming months. Google Play Protect always checks each app before it gets installed on your device, regardless of the install source. It conducts real-time scanning of an app, enhanced by on-device machine learning, when users try to install an app that has never been seen by Google Play Protect to help detect emerging threats. We’ve made Google Play Protect’s on-device capabilities smarter to help us identify more malicious applications even faster to keep you safe. Google Play Protect now uses a new set of on-device rules to specifically look for text or binary patterns to quickly identify malware families. If an app shows these malicious patterns, we can alert you before you even install it. And to keep you safe from new and emerging malware and their variants, we will update these rules frequently for better classification over time. This update to Google Play Protect is now available globally for all Android users with Google Play services. Always advancing Android security In addition to new features that come in numbered Android releases, we're constantly enhancing your protection on Android through seamless Google Play services updates and other improvements, ensuring you benefit from the latest security advancements continuously. This allows us to rapidly deploy critical defenses and keep you ahead of emerging threats, making your Android experience safer every day.Through close collaboration with our partners across the Android ecosystem and the broader security community, we remain focused on bringing you security enhancements and innovative new features to help keep you safe. Notes In-call protection for disabling Google Play Protect is available on Android 6+ devices. Protections for sideloading an app and turning on accessibility permissions are available on Android 16 devices. ↩ When a user’s device is not connected to Wi-Fi and has not been recently unlocked ↩

Advanced Protection: Google’s Strongest Security for Mobile Devices

Tuesday May 13th, 2025 06:04:26 PM
Posted by Il-Sung Lee, Group Product Manager, Android Security Protecting users who need heightened security has been a long-standing commitment at Google, which is why we have our Advanced Protection Program that provides Google’s strongest protections against targeted attacks.To enhance these existing device defenses, Android 16 extends Advanced Protection with a device-level security setting for Android users. Whether you’re an at-risk individual – such as a journalist, elected official, or public figure – or you just prioritize security, Advanced Protection gives you the ability to activate Google’s strongest security for mobile devices, providing greater peace of mind that you’re protected against the most sophisticated threats. Simple to activate, powerful in protectionAdvanced Protection ensures all of Android's highest security features are enabled and are seamlessly working together to safeguard you against online attacks, harmful apps, and data risks. Advanced Protection activates a powerful array of security features, combining new capabilities with pre-existing ones that have earned top ratings in security comparisons, all designed to protect your device across several critical areas.We're also introducing innovative, Android-specific features, such as Intrusion Logging. This industry-first feature securely backs up device logs in a privacy-preserving and tamper-resistant way, accessible only to the user. These logs enable a forensic analysis if a device compromise is ever suspected. Advanced Protection gives users: Best-in-class protection, minimal disruption: Advanced Protection gives users the option to equip their devices with Android’s most effective security features for proactive defense, with a user-friendly and low-friction experience. Easy activation: Advanced Protection makes security easy and accessible. You don’t need to be a security expert to benefit from enhanced security. Defense-in-depth: Once a user turns on Advanced Protection, the system prevents accidental or malicious disablement of the individual security features under the Advanced Protection umbrella. This reflects a "defense-in-depth" strategy, where multiple security layers work together. Seamless security integration with apps: Advanced Protection acts as a single control point that enables important security settings across many of your favorite Google apps, including Chrome, Google Message, and Phone by Google. Advanced Protection will also incorporate third-party applications that choose to integrate in the future. How your Android device becomes fortified with Advanced Protection Advanced Protection manages the following existing and new security features for your device, ensuring they are activated and cannot be disabled across critical protection areas: Continuously evolving Advanced ProtectionWith the release of Android 16, users who choose to activate Advanced Protection will gain immediate access to a core suite of enhanced security features. Additional Advanced Protection features like Intrusion Logging, USB protection, the option to disable auto-reconnect to insecure networks, and integration with Scam Detection for Phone by Google will become available later this year. We are committed to continuously expanding the security and privacy capabilities within Advanced Protection, so users can benefit from the best of Android’s powerful security features.

Using AI to stop tech support scams in Chrome

Thursday May 8th, 2025 04:59:22 PM
Posted by Jasika Bawa, Andy Lim, and Xinghui Lu, Google Chrome Security Tech support scams are an increasingly prevalent form of cybercrime, characterized by deceptive tactics aimed at extorting money or gaining unauthorized access to sensitive data. In a tech support scam, the goal of the scammer is to trick you into believing your computer has a serious problem, such as a virus or malware infection, and then convince you to pay for unnecessary services, software, or grant them remote access to your device. Tech support scams on the web often employ alarming pop-up warnings mimicking legitimate security alerts. We've also observed them to use full-screen takeovers and disable keyboard and mouse input to create a sense of crisis. Chrome has always worked with Google Safe Browsing to help keep you safe online. Now, with this week's launch of Chrome 137, Chrome will offer an additional layer of protection using the on-device Gemini Nano large language model (LLM). This new feature will leverage the LLM to generate signals that will be used by Safe Browsing in order to deliver higher confidence verdicts about potentially dangerous sites like tech support scams. Initial research using LLMs has shown that they are relatively effective at understanding and classifying the varied, complex nature of websites. As such, we believe we can leverage LLMs to help detect scams at scale and adapt to new tactics more quickly. But why on-device? Leveraging LLMs on-device allows us to see threats when users see them. We’ve found that the average malicious site exists for less than 10 minutes, so on-device protection allows us to detect and block attacks that haven't been crawled before. The on-device approach also empowers us to see threats the way users see them. Sites can render themselves differently for different users, often for legitimate purposes (e.g. to account for device differences, offer personalization, provide time-sensitive content), but sometimes for illegitimate purposes (e.g. to evade security crawlers) – as such, having visibility into how sites are presenting themselves to real users enhances our ability to assess the web. How it works At a high level, here's how this new layer of protection works. Overview of how on-device LLM assistance in mitigating scams works When a user navigates to a potentially dangerous page, specific triggers that are characteristic of tech support scams (for example, the use of the keyboard lock API) will cause Chrome to evaluate the page using the on-device Gemini Nano LLM. Chrome provides the LLM with the contents of the page that the user is on and queries it to extract security signals, such as the intent of the page. This information is then sent to Safe Browsing for a final verdict. If Safe Browsing determines that the page is likely to be a scam based on the LLM output it receives from the client, in addition to other intelligence and metadata about the site, Chrome will show a warning interstitial. This is all done in a way that preserves performance and privacy. In addition to ensuring that the LLM is only triggered sparingly and run locally on the device, we carefully manage resource consumption by considering the number of tokens used, running the process asynchronously to avoid interrupting browser activity, and implementing throttling and quota enforcement mechanisms to limit GPU usage. LLM-summarized security signals are only sent to Safe Browsing for users who have opted-in to the Enhanced Protection mode of Safe Browsing in Chrome, giving them protection against threats Google may not have seen before. Standard Protection users will also benefit indirectly from this feature as we add newly discovered dangerous sites to blocklists. Future considerations The scam landscape continues to evolve, with bad actors constantly adapting their tactics. Beyond tech support scams, in the future we plan to use the capabilities described in this post to help detect other popular scam types, such as package tracking scams and unpaid toll scams. We also plan to utilize the growing power of Gemini to extract additional signals from website content, which will further enhance our detection capabilities. To protect even more users from scams, we are working on rolling out this feature to Chrome on Android later this year. And finally, we are collaborating with our research counterparts to explore solutions to potential exploits such as prompt injection in content and timing bypass.

Google announces Sec-Gemini v1, a new experimental cybersecurity model

Friday April 4th, 2025 07:54:57 PM
Posted by Elie Burzstein and Marianna Tishchenko, Sec-Gemini teamToday, we’re announcing Sec-Gemini v1, a new experimental AI model focused on advancing cybersecurity AI frontiers. As outlined a year ago, defenders face the daunting task of securing against all cyber threats, while attackers need to successfully find and exploit only a single vulnerability. This fundamental asymmetry has made securing systems extremely difficult, time consuming and error prone. AI-powered cybersecurity workflows have the potential to help shift the balance back to the defenders by force multiplying cybersecurity professionals like never before. Effectively powering SecOps workflows requires state-of-the-art reasoning capabilities and extensive current cybersecurity knowledge. Sec-Gemini v1 achieves this by combining Gemini’s advanced capabilities with near real-time cybersecurity knowledge and tooling. This combination allows it to achieve superior performance on key cybersecurity workflows, including incident root cause analysis, threat analysis, and vulnerability impact understanding.We firmly believe that successfully pushing AI cybersecurity frontiers to decisively tilt the balance in favor of the defenders requires a strong collaboration across the cybersecurity community. This is why we are making Sec-Gemini v1 freely available to select organizations, institutions, professionals, and NGOs for research purposes.Sec-Gemini v1 outperforms other models on key cybersecurity benchmarks as a result of its advanced integration of Google Threat Intelligence (GTI), OSV, and other key data sources. Sec-Gemini v1 outperforms other models on CTI-MCQ, a leading threat intelligence benchmark, by at least 11% (See Figure 1). It also outperforms other models by at least 10.5% on the CTI-Root Cause Mapping benchmark (See Figure 2):Figure 1: Sec-Gemini v1 outperforms other models on the CTI-MCQ Cybersecurity Threat Intelligence benchmark.Figure 2: Sec-Gemini v1 has outperformed other models in a Cybersecurity Threat Intelligence-Root Cause Mapping (CTI-RCM) benchmark that evaluates an LLM's ability to understand the nuances of vulnerability descriptions, identify vulnerabilities underlying root causes, and accurately classify them according to the CWE taxonomy.Below is an example of the comprehensiveness of Sec-Gemini v1’s answers in response to key cybersecurity questions. First, Sec-Gemini v1 is able to determine that Salt Typhoon is a threat actor (not all models do) and provides a comprehensive description of that threat actor, thanks to its deep integration with Mandiant Threat intelligence data.Next, in response to a question about the vulnerabilities in the Salt Typhoon description, Sec-Gemini v1 outputs not only vulnerability details (thanks to its integration with OSV data, the open-source vulnerabilities database operated by Google), but also contextualizes the vulnerabilities with respect to threat actors (using Mandiant data). With Sec-Gemini v1, analysts can understand the risk and threat profile associated with specific vulnerabilities faster.If you are interested in collaborating with us on advancing the AI cybersecurity frontier, please request early access to Sec-Gemini v1 via this form.

Taming the Wild West of ML: Practical Model Signing with Sigstore

Friday April 4th, 2025 05:00:39 PM
Posted by Mihai Maruseac, Google Open Source Security Team (GOSST)In partnership with NVIDIA and HiddenLayer, as part of the Open Source Security Foundation, we are now launching the first stable version of our model signing library. Using digital signatures like those from Sigstore, we allow users to verify that the model used by the application is exactly the model that was created by the developers. In this blog post we will illustrate why this release is important from Google’s point of view.With the advent of LLMs, the ML field has entered an era of rapid evolution. We have seen remarkable progress leading to weekly launches of various applications which incorporate ML models to perform tasks ranging from customer support, software development, and even performing security critical tasks.However, this has also opened the door to a new wave of security threats. Model and data poisoning, prompt injection, prompt leaking and prompt evasion are just a few of the risks that have recently been in the news. Garnering less attention are the risks around the ML supply chain process: since models are an uninspectable collection of weights (sometimes also with arbitrary code), an attacker can tamper with them and achieve significant impact to those using the models. Users, developers, and practitioners need to examine an important question during their risk assessment process: “can I trust this model?”Since its launch, Google’s Secure AI Framework (SAIF) has created guidance and technical solutions for creating AI applications that users can trust. A first step in achieving trust in the model is to permit users to verify its integrity and provenance, to prevent tampering across all processes from training to usage, via cryptographic signing. The ML supply chainTo understand the need for the model signing project, let’s look at the way ML powered applications are developed, with an eye to where malicious tampering can occur.Applications that use advanced AI models are typically developed in at least three different stages. First, a large foundation model is trained on large datasets. Next, a separate ML team finetunes the model to make it achieve good performance on application specific tasks. Finally,  this fine-tuned model is embedded into an application.The three steps involved in building an application that uses large language models.These three stages are usually handled by different teams, and potentially even different companies, since each stage requires specialized expertise. To make models available from one stage to the next, practitioners leverage model hubs, which are repositories for storing models. Kaggle and HuggingFace are popular open source options, although internal model hubs could also be used.This separation into stages creates multiple opportunities where a malicious user (or external threat actor who has compromised the internal infrastructure) could tamper with the model. This could range from just a slight alteration of the model weights that control model behavior, to injecting architectural backdoors — completely new model behaviors and capabilities that could be triggered only on specific inputs. It is also possible to exploit the serialization format and inject arbitrary code execution in the model as saved on disk — our whitepaper on AI supply chain integrity goes into more details on how popular model serialization libraries could be exploited. The following diagram summarizes the risks across the ML supply chain for developing a single model, as discussed in the whitepaper.The supply chain diagram for building a single model, illustrating some supply chain risks (oval labels) and where model signing can defend against them (check marks)The diagram shows several places where the model could be compromised. Most of these could be prevented by signing the model during training and verifying integrity before any usage, in every step: the signature would have to be verified when the model gets uploaded to a model hub, when the model gets selected to be deployed into an application (embedded or via remote APIs) and when the model is used as an intermediary during another training run. Assuming the training infrastructure is trustworthy and not compromised, this approach guarantees that each model user can trust the model.Sigstore for ML modelsSigning models is inspired by code signing, a critical step in traditional software development. A signed binary artifact helps users identify its producer and prevents tampering after publication. The average developer, however, would not want to manage keys and rotate them on compromise.These challenges are addressed by using Sigstore, a collection of tools and services that make code signing secure and easy. By binding an OpenID Connect token to a workload or developer identity, Sigstore alleviates the need to manage or rotate long-lived secrets. Furthermore, signing is made transparent so signatures over malicious artifacts could be audited in a public transparency log, by anyone. This ensures that split-view attacks are not possible, so any user would get the exact same model. These features are why we recommend Sigstore’s signing mechanism as the default approach for signing ML models.Today the OSS community is releasing the v1.0 stable version of our model signing library as a Python package supporting Sigstore and traditional signing methods. This model signing library is specialized to handle the sheer scale of ML models (which are usually much larger than traditional software components), and handles signing models represented as a directory tree. The package provides CLI utilities so that users can sign and verify model signatures for individual models. The package can also be used as a library which we plan to incorporate directly into model hub upload flows as well as into ML frameworks.Future goalsWe can view model signing as establishing the foundation of trust in the ML ecosystem. We envision extending this approach to also include datasets and other ML-related artifacts. Then, we plan to build on top of signatures, towards fully tamper-proof metadata records, that can be read by both humans and machines. This has the potential to automate a significant fraction of the work needed to perform incident response in case of a compromise in the ML world. In an ideal world, an ML developer would not need to perform any code changes to the training code, while the framework itself would handle model signing and verification in a transparent manner.If you are interested in the future of this project, join the OpenSSF meetings attached to the project. To shape the future of building tamper-proof ML, join the Coalition for Secure AI, where we are planning to work on building the entire trust ecosystem together with the open source community. In collaboration with multiple industry partners, we are starting up a special interest group under CoSAI for defining the future of ML signing and including tamper-proof ML metadata, such as model cards and evaluation results.

New security requirements adopted by HTTPS certificate industry

Thursday March 27th, 2025 08:49:41 PM
Posted by Chrome Root Program, Chrome Security Team The Chrome Root Program launched in 2022 as part of Google’s ongoing commitment to upholding secure and reliable network connections in Chrome. We previously described how the Chrome Root Program keeps users safe, and described how the program is focused on promoting technologies and practices that strengthen the underlying security assurances provided by Transport Layer Security (TLS). Many of these initiatives are described on our forward looking, public roadmap named “Moving Forward, Together.” At a high-level, “Moving Forward, Together” is our vision of the future. It is non-normative and considered distinct from the requirements detailed in the Chrome Root Program Policy. It’s focused on themes that we feel are essential to further improving the Web PKI ecosystem going forward, complementing Chrome’s core principles of speed, security, stability, and simplicity. These themes include: Encouraging modern infrastructures and agility Focusing on simplicity Promoting automation Reducing mis-issuance Increasing accountability and ecosystem integrity Streamlining and improving domain validation practices Preparing for a "post-quantum" world Earlier this month, two “Moving Forward, Together” initiatives became required practices in the CA/Browser Forum Baseline Requirements (BRs). The CA/Browser Forum is a cross-industry group that works together to develop minimum requirements for TLS certificates. Ultimately, these new initiatives represent an improvement to the security and agility of every TLS connection relied upon by Chrome users. If you’re unfamiliar with HTTPS and certificates, see the “Introduction” of this blog post for a high-level overview. Multi-Perspective Issuance Corroboration Before issuing a certificate to a website, a Certification Authority (CA) must verify the requestor legitimately controls the domain whose name will be represented in the certificate. This process is referred to as "domain control validation" and there are several well-defined methods that can be used. For example, a CA can specify a random value to be placed on a website, and then perform a check to verify the value’s presence has been published by the certificate requestor. Despite the existing domain control validation requirements defined by the CA/Browser Forum, peer-reviewed research authored by the Center for Information Technology Policy (CITP) of Princeton University and others highlighted the risk of Border Gateway Protocol (BGP) attacks and prefix-hijacking resulting in fraudulently issued certificates. This risk was not merely theoretical, as it was demonstrated that attackers successfully exploited this vulnerability on numerous occasions, with just one of these attacks resulting in approximately $2 million dollars of direct losses. Multi-Perspective Issuance Corroboration (referred to as "MPIC") enhances existing domain control validation methods by reducing the likelihood that routing attacks can result in fraudulently issued certificates. Rather than performing domain control validation and authorization from a single geographic or routing vantage point, which an adversary could influence as demonstrated by security researchers, MPIC implementations perform the same validation from multiple geographic locations and/or Internet Service Providers. This has been observed as an effective countermeasure against ethically conducted, real-world BGP hijacks. The Chrome Root Program led a work team of ecosystem participants, which culminated in a CA/Browser Forum Ballot to require adoption of MPIC via Ballot SC-067. The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on MPIC as part of their certificate issuance process. Some of these CAs are relying on the Open MPIC Project to ensure their implementations are robust and consistent with ecosystem expectations. We’d especially like to thank Henry Birge-Lee, Grace Cimaszewski, Liang Wang, Cyrill Krähenbühl, Mihir Kshirsagar, Prateek Mittal, Jennifer Rexford, and others from Princeton University for their sustained efforts in promoting meaningful web security improvements and ongoing partnership. Linting Linting refers to the automated process of analyzing X.509 certificates to detect and prevent errors, inconsistencies, and non-compliance with requirements and industry standards. Linting ensures certificates are well-formatted and include the necessary data for their intended use, such as website authentication. Linting can expose the use of weak or obsolete cryptographic algorithms and other known insecure practices, improving overall security. Linting improves interoperability and helps CAs reduce the risk of non-compliance with industry standards (e.g., CA/Browser Forum TLS Baseline Requirements). Non-compliance can result in certificates being "mis-issued". Detecting these issues before a certificate is in use by a site operator reduces the negative impact associated with having to correct a mis-issued certificate. There are numerous open-source linting projects in existence (e.g., certlint, pkilint, x509lint, and zlint), in addition to numerous custom linting projects maintained by members of the Web PKI ecosystem. “Meta” linters, like pkimetal, combine multiple linting tools into a single solution, offering simplicity and significant performance improvements to implementers compared to implementing multiple standalone linting solutions. Last spring, the Chrome Root Program led ecosystem-wide experiments, emphasizing the need for linting adoption due to the discovery of widespread certificate mis-issuance. We later participated in drafting CA/Browser Forum Ballot SC-075 to require adoption of certificate linting. The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on linting as part of their certificate issuance process. What’s next? We recently landed an updated version of the Chrome Root Program Policy that further aligns with the goals outlined in “Moving Forward, Together.” The Chrome Root Program remains committed to proactive advancement of the Web PKI. This commitment was recently realized in practice through our proposal to sunset demonstrated weak domain control validation methods permitted by the CA/Browser Forum TLS Baseline Requirements. The weak validation methods in question are now prohibited beginning July 15, 2025. It’s essential we all work together to continually improve the Web PKI, and reduce the opportunities for risk and abuse before measurable harm can be realized. We continue to value collaboration with web security professionals and the members of the CA/Browser Forum to realize a safer Internet. Looking forward, we’re excited to explore a reimagined Web PKI and Chrome Root Program with even stronger security assurances for the web as we navigate the transition to post-quantum cryptography. We’ll have more to say about quantum-resistant PKI later this year.

Titan Security Keys now available in more countries

Wednesday March 26th, 2025 05:00:12 PM
Posted by Christiaan Brand, Group Product ManagerWe’re excited to announce that starting today, Titan Security Keys are available for purchase in more than 10 new countries:IrelandPortugalThe NetherlandsDenmarkNorwaySwedenFinlandAustraliaNew ZealandSingaporePuerto RicoThis expansion means Titan Security Keys are now available in 22 markets, including previously announced countries like Austria, Belgium, Canada, France, Germany, Italy, Japan, Spain, Switzerland, the UK, and the US.What is a Titan Security Key?A Titan Security Key is a small, physical device that you can use to verify your identity when you sign in to your Google Account. It’s like a second password that’s much harder for cybercriminals to steal.Titan Security Keys allow you to store your passkeys on a strong, purpose-built device that can help protect you against phishing and other online attacks. They’re easy to use and work with a wide range of devices and services as they’re compatible with the FIDO2 standard.How do I use a Titan Security Key?To use a Titan Security Key, you simply plug it into your computer’s USB port or tap it to your device using NFC. When you’re asked to verify your identity, you’ll just need to tap the button on the key.Where can I buy a Titan Security Key?You can buy Titan Security Keys on the Google Store.We’re committed to making our products available to as many people as possible and we hope this expansion will help more people stay safe online.

Announcing OSV-Scanner V2: Vulnerability scanner and remediation tool for open source

Monday March 17th, 2025 04:47:25 PM
Posted by Rex Pan and Xueqin Cui, Google Open Source Security TeamIn December 2022, we released the open source OSV-Scanner tool, and earlier this year, we open sourced OSV-SCALIBR. OSV-Scanner and OSV-SCALIBR, together with OSV.dev are components of an open platform for managing vulnerability metadata and enabling simple and accurate matching and remediation of known vulnerabilities. Our goal is to simplify and streamline vulnerability management for developers and security teams alike.Today, we're thrilled to announce the launch of OSV-Scanner V2.0.0, following the announcement of the beta version. This V2 release builds upon the foundation we laid with OSV-SCALIBR and adds significant new capabilities to OSV-Scanner, making it a comprehensive vulnerability scanner and remediation tool with broad support for formats and ecosystems. What’s newEnhanced Dependency Extraction with OSV-SCALIBRThis release represents the first major integration of OSV-SCALIBR features into OSV-Scanner, which is now the official command-line code and container scanning tool for the OSV-SCALIBR library. This integration also expanded our support for the kinds of dependencies we can extract from projects and containers:Source manifests and lockfiles:.NET: deps.jsonPython: uv.lockJavaScript: bun.lockHaskell: cabal.project.freeze, stack.yaml.lockArtifacts:Node modulesPython wheelsJava uber jarsGo binariesLayer and base image-aware container scanningPreviously, OSV-Scanner focused on scanning of source repositories and language package manifests and lockfiles. OSV-Scanner V2 adds support for comprehensive, layer-aware scanning for Debian, Ubuntu, and Alpine container images. OSV-Scanner can now analyze container images to provide:Layers where a package was first introducedLayer history and commandsBase images the image is based on (leveraging a new experimental API provided by deps.dev).OS/Distro the container is running onFiltering of vulnerabilities that are unlikely to impact your container imageThis layer analysis currently supports the following OSes and languages:Distro Support:Alpine OSDebianUbuntuLanguage Artifacts Support:GoJavaNodePythonInteractive HTML outputPresenting vulnerability scan information in a clear and actionable way is difficult, particularly in the context of container scanning. To address this, we built a new interactive local HTML output format. This provides more interactivity and information compared to terminal only outputs, including:Severity breakdownPackage and ID filteringVulnerability importance filteringFull vulnerability advisory entriesAnd additionally for container image scanning:Layer filteringImage layer informationBase image identificationIllustration of HTML output for container image scanningGuided remediation for Maven pom.xmlLast year we released a feature called guided remediation for npm, which streamlines vulnerability management by intelligently suggesting prioritized, targeted upgrades and offering flexible strategies. This ultimately maximizes security improvements while minimizing disruption. We have now expanded this feature to Java through support for Maven pom.xml.With guided remediation support for Maven, you can remediate vulnerabilities in both direct and transitive dependencies through direct version updates or overriding versions through dependency management.We’ve introduced a few new things for our Maven support:A new remediation strategy override.Support for reading and writing pom.xml files, including writing changes to local parent pom files. We leverage OSV-Scalibr for Maven transitive dependency extraction.A private registry can be specified to fetch Maven metadata.A new experimental subcommend to update all your dependencies in pom.xml to the latest version.We also introduced machine readable output for guided remediation that makes it easier to integrate guided remediation into your workflow.What’s next?We have exciting plans for the remainder of the year, including:Continued OSV-SCALIBR Convergence: We will continue to converge OSV-Scanner and OSV-SCALIBR to bring OSV-SCALIBR’s functionality to OSV-Scanner’s CLI interface.Expanded Ecosystem Support: We'll expand the number of ecosystems we support across all the features currently in OSV-Scanner, including more languages for guided remediation, OS advisories for container scanning, and more general lockfile support for source code scanning.Full Filesystem Accountability for Containers: Another goal of osv-scanner is to give you the ability to know and account for every single file on your container image, including sideloaded binaries downloaded from the internet.Reachability Analysis: We're working on integrating reachability analysis to provide deeper insights into the potential impact of vulnerabilities.VEX Support: We're planning to add support for Vulnerability Exchange (VEX) to facilitate better communication and collaboration around vulnerability information.Try OSV-Scanner V2You can try V2.0.0 and contribute to its ongoing development by checking out OSV-Scanner or the OSV-SCALIBR repository. We welcome your feedback and contributions as we continue to improve the platform and make vulnerability management easier for everyone.If you have any questions or if you would like to contribute, don't hesitate to reach out to us at osv-discuss@google.com, or post an issue in our issue tracker.

Vulnerability Reward Program: 2024 in Review

Tuesday March 11th, 2025 05:52:54 PM
Posted by Dirk GöhmannIn 2024, our Vulnerability Reward Program confirmed the ongoing value of engaging with the security research community to make Google and its products safer. This was evident as we awarded just shy of $12 million to over 600 researchers based in countries around the globe across all of our programs.Vulnerability Reward Program 2024 in NumbersYou can learn about who’s reporting to the Vulnerability Reward Program via our Leaderboard – and find out more about our youngest security researchers who’ve recently joined the ranks of Google bug hunters.VRP Highlights in 2024In 2024 we made a series of changes and improvements coming to our vulnerability reward programs and related initiatives:The Google VRP revamped its reward structure, bumping rewards up to a maximum of $151,515, the Mobile VRP is now offering up to $300,000 for critical vulnerabilities in top-tier apps, Cloud VRP has a top-tier award of up $151,515, and Chrome awards now peak at $250,000 (see the below section on Chrome for details).We rolled out InternetCTF – to get rewarded, discover novel code execution vulnerabilities in open source and provide Tsunami plugin patches for them.The Abuse VRP saw a 40% YoY increase in payouts – we received over 250 valid bugs targeting abuse and misuse issues in Google products, resulting in over $290,000 in rewards.To improve the payment process for rewards going to bug hunters, we introduced Bugcrowd as an additional payment option on bughunters.google.com alongside the existing standard Google payment option. We hosted two editions of bugSWAT for training, skill sharing, and, of course, some live hacking – in August, we had 16 bug hunters in attendance in Las Vegas, and in October, as part of our annual security conference ESCAL8 in Malaga, Spain, we welcomed 40 of our top researchers. Between these two events, our bug hunters were rewarded $370,000 (and plenty of swag).We doubled down on our commitment to support the next generation of security engineers by hosting four init.g workshops (Las Vegas, São Paulo, Paris, and Malaga). Follow the Google VRP channel on X to stay tuned on future events.More detailed updates on selected programs are shared in the following sections.Android and Google DevicesIn 2024, the Android and Google Devices Security Reward Program and the Google Mobile Vulnerability Reward Program, both part of the broader Google Bug Hunters program, continued their mission to fortify the Android ecosystem, achieving new heights in both impact and severity. We awarded over $3.3 million in rewards to researchers who demonstrated exceptional skill in uncovering critical vulnerabilities within Android and Google mobile applications. The above numbers mark a significant change compared to previous years. Although we saw an 8% decrease in the total number of submissions, there was a 2% increase in the number of critical and high vulnerabilities. In other words, fewer researchers are submitting fewer, but more impactful bugs, and are citing the improved security posture of the Android operating system as the central challenge. This showcases the program's sustained success in hardening Android.This year, we had a heightened focus on Android Automotive OS and WearOS, bringing actual automotive devices to multiple live hacking events and conferences. At ESCAL8, we hosted a live-hacking challenge focused on Pixel devices, resulting in over $75,000 in rewards in one weekend, and the discovery of several memory safety vulnerabilities. To facilitate learning, we launched a new Android hacking course in collaboration with external security researchers, focused on mobile app security, designed for newcomers and veterans alike. Stay tuned for more.We extend our deepest gratitude to the dedicated researchers who make the Android ecosystem safer. We're proud to work with you! Special thanks to Zinuo Han (@ele7enxxh) for their expertise in Bluetooth security, blunt (@blunt_qian) for holding the record for the most valid reports submitted to the Google Play Security Reward Program, and WANG,YONG (@ThomasKing2014) for groundbreaking research on rooting Android devices with kernel MTE enabled. We also appreciate all researchers who participated in last year's bugSWAT event in Málaga. Your contributions are invaluable! ChromeChrome did some remodeling in 2024 as we updated our reward amounts and structure to incentivize deeper research. For example, we increased our maximum reward for a single issue to $250,000 for demonstrating RCE in the browser or other non-sandboxed process, and more if done directly without requiring a renderer compromise. In 2024, UAF mitigation MiraclePtr was fully launched across all platforms, and a year after the initial launch, MiraclePtr-protected bugs are no longer being considered exploitable security bugs. In tandem, we increased the MiraclePtr Bypass Reward to $250,128. Between April and November, we also launched the first and second iterations of the V8 Sandbox Bypass Rewards as part of the progression towards the V8 sandbox, eventually becoming a security boundary in Chrome. We received 337 reports of unique, valid security bugs in Chrome during 2024, and awarded 137 Chrome VRP researchers $3.4 million in total. The highest single reward of 2024 was $100,115 and was awarded to Mickey for their report of a MiraclePtr Bypass after MiraclePtr was initially enabled across most platforms in Chrome M115 in 2023. We rounded out the year by announcing the top 20 Chrome VRP researchers for 2024, all of whom were gifted new Chrome VRP swag, featuring our new Chrome VRP mascot, Bug.Cloud VRPThe Cloud VRP launched in October as a Cloud-focused vulnerability reward program dedicated to Google Cloud products and services. As part of the launch, we also updated our product tiering and improved our reward structure to better align our reports with their impact on Google Cloud. This resulted in over 150 Google Cloud products coming under the top two reward tiers, enabling better rewards for our Cloud researchers and a more secure cloud.Since its launch, Google Cloud VRP triaged over 400 reports and filed over 200 unique security vulnerabilities for Google Cloud products and services leading to over $500,000 in researcher rewards. Our highlight last year was launching at the bugSWAT event in Málaga where we got to meet many of our amazing researchers who make our program so successful! The overwhelming positive feedback from the researcher community continues to propel us to mature Google Cloud VRP further this year. Stay tuned for some exciting announcements!Generative AIWe’re celebrating an exciting first year of AI bug bounties.  We received over 150 bug reports – over $55,000 in rewards so far – with one-in-six leading to key improvements. We also ran a bugSWAT live-hacking event targeting LLM products and received 35 reports, totaling more than $87,000 – including issues like “Hacking Google Bard - From Prompt Injection to Data Exfiltration” and “We Hacked Google A.I. for $50,000”.Keep an eye on Gen AI in 2025 as we focus on expanding scope and sharing additional ways for our researcher community to contribute. Looking Forward to 2025In 2025, we will be celebrating 15 years of VRP at Google, during which we have remained fully committed to fostering collaboration, innovation, and transparency with the security community, and will continue to do so in the future. Our goal remains to stay ahead of emerging threats, adapt to evolving technologies, and continue to strengthen the security posture of Google’s products and services. We want to send a huge thank you to our bug hunter community for helping us make Google products and platforms more safe and secure for our users around the world – and invite researchers not yet engaged with the Vulnerability Reward Program to join us in our mission to keep Google safe! Thank you to Dirk Göhmann, Amy Ressler, Eduardo Vela, Jan Keller, Krzysztof Kotowicz, Martin Straka, Michael Cote, Mike Antares, Sri Tulasiram, and Tony Mendez. Tip: Want to be informed of new developments and events around our Vulnerability Reward Program? Follow the Google VRP channel on X to stay in the loop and be sure to check out the Security Engineering blog, which covers topics ranging from VRP updates to security practices and vulnerability descriptions (30 posts in 2024)!

New AI-Powered Scam Detection Features to Help Protect You on Android

Tuesday March 4th, 2025 04:59:32 PM
Posted by Lyubov Farafonova, Product Manager, Phone by Google; Alberto Pastor Nieto, Sr. Product Manager Google Messages and RCS Spam and Abuse Google has been at the forefront of protecting users from the ever-growing threat of scams and fraud with cutting-edge technologies and security expertise for years. In 2024, scammers used increasingly sophisticated tactics and generative AI-powered tools to steal more than $1 trillion from mobile consumers globally, according to the Global Anti-Scam Alliance. And with the majority of scams now delivered through phone calls and text messages, we’ve been focused on making Android’s safeguards even more intelligent with powerful Google AI to help keep your financial information and data safe. Today, we’re launching two new industry-leading AI-powered scam detection features for calls and text messages, designed to protect users from increasingly complex and damaging scams. These features specifically target conversational scams, which can often appear initially harmless before evolving into harmful situations. To enhance our detection capabilities, we partnered with financial institutions around the world to better understand the latest advanced and most common scams their customers are facing. For example, users are experiencing more conversational text scams that begin innocently, but gradually manipulate victims into sharing sensitive data, handing over funds, or switching to other messaging apps. And more phone calling scammers are using spoofing techniques to hide their real numbers and pretend to be trusted companies. Traditional spam protections are focused on protecting users before the conversation starts, and are less effective against these latest tactics from scammers that turn dangerous mid-conversation and use social engineering techniques. To better protect users, we invested in new, intelligent AI models capable of detecting suspicious patterns and delivering real-time warnings over the course of a conversation, all while prioritizing user privacy. Scam Detection for messages We’re building on our enhancements to existing Spam Protection in Google Messages that strengthen defenses against job and delivery scams, which are continuing to roll out to users. We’re now introducing Scam Detection to detect a wider range of fraudulent activities. Scam Detection in Google Messages uses powerful Google AI to proactively address conversational scams by providing real-time detection even after initial messages are received. When the on-device AI detects a suspicious pattern in SMS, MMS, and RCS messages, users will now get a message warning of a likely scam with an option to dismiss or report and block the sender. As part of the Spam Protection setting, Scam Detection on Google Messages is on by default and only applies to conversations with non-contacts. Your privacy is protected with Scam Detection in Google Messages, with all message processing remaining on-device. Your conversations remain private to you; if you choose to report a conversation to help reduce widespread spam, only sender details and recent messages with that sender are shared with Google and carriers. You can turn off Spam Protection, which includes Scam Detection, in your Google Messages at any time. Scam Detection in Google Messages is launching in English first in the U.S., U.K. and Canada and will expand to more countries soon. Scam Detection for calls More than half of Americans reported receiving at least one scam call per day in 2024. To combat the rise of sophisticated conversational scams that deceive victims over the course of a phone call, we introduced Scam Detection late last year to U.S.-based English-speaking Phone by Google public beta users on Pixel phones. We use AI models processed on-device to analyze conversations in real-time and warn users of potential scams. If a caller, for example, tries to get you to provide payment via gift cards to complete a delivery, Scam Detection will alert you through audio and haptic notifications and display a warning on your phone that the call may be a scam. During our limited beta, we analyzed calls with Gemini Nano, Google’s built-in, on-device foundation model, on Pixel 9 devices and used smaller, robust on-device machine-learning models for Pixel 6+ users. Our testing showed that Gemini Nano outperformed other models, so as a result, we're currently expanding the availability of the beta to bring the most capable Scam Detection to all English-speaking Pixel 9+ users in the U.S. Similar to Scam Detection in messaging, we built this feature to protect your privacy by processing everything on-device. Call audio is processed ephemerally and no conversation audio or transcription is recorded, stored on the device, or sent to Google or third parties. Scam Detection in Phone by Google is off by default to give users control over this feature, as phone call audio is more ephemeral compared to messages, which are stored on devices. Scam Detection only applies to calls that could potentially be scams, and is never used during calls with your contacts. If enabled, Scam Detection will beep at the start and during the call to notify participants the feature is on. You can turn off Scam Detection at any time, during an individual call or for all future calls. According to our research and a Scam Detection beta user survey, these types of alerts have already helped people be more cautious on the phone, detect suspicious activity, and avoid falling victim to conversational scams. Keeping Android users safe with the power of Google AI We're committed to keeping Android users safe, and that means constantly evolving our defenses against increasingly sophisticated scams and fraud. Our investment in intelligent protection is having real-world impact for billions of users. Leviathan Security Group, a cybersecurity firm, conducted a funded evaluation of fraud protection features on a number of smartphones and found that Android smartphones, led by the Pixel 9 Pro, scored highest for built-in security features and anti-fraud efficacy1. With AI-powered innovations like Scam Detection in Messages and Phone by Google, we're giving you more tools to stay one step ahead of bad actors. We're constantly working with our partners across the Android ecosystem to help bring new security features to even more users. Together, we’re always working to keep you safe on Android. Notes Based on third-party research funded by Google LLC in Feb 2025 comparing the Pixel 9 Pro, iPhone 16 Pro, Samsung S24+ and Xiaomi 14 Ultra. Evaluation based on no-cost smartphone features enabled by default. Some features may not be available in all countries. ↩

Securing tomorrow's software: the need for memory safety standards

Tuesday February 25th, 2025 08:04:10 PM
Posted by Alex Rebert, Security Foundations, Ben Laurie, Research, Murali Vijayaraghavan, Research and Alex Richardson, SiliconFor decades, memory safety vulnerabilities have been at the center of various security incidents across the industry, eroding trust in technology and costing billions. Traditional approaches, like code auditing, fuzzing, and exploit mitigations – while helpful – haven't been enough to stem the tide, while incurring an increasingly high cost.In this blog post, we are calling for a fundamental shift: a collective commitment to finally eliminate this class of vulnerabilities, anchored on secure-by-design practices – not just for ourselves but for the generations that follow.The shift we are calling for is reinforced by a recent ACM article calling to standardize memory safety we took part in releasing with academic and industry partners. It's a recognition that the lack of memory safety is no longer a niche technical problem but a societal one, impacting everything from national security to personal privacy.The standardization opportunityOver the past decade, a confluence of secure-by-design advancements has matured to the point of practical, widespread deployment. This includes memory-safe languages, now including high-performance ones such as Rust, as well as safer language subsets like Safe Buffers for C++. These tools are already proving effective. In Android for example, the increasing adoption of memory-safe languages like Kotlin and Rust in new code has driven a significant reduction in vulnerabilities.Looking forward, we're also seeing exciting and promising developments in hardware. Technologies like ARM's Memory Tagging Extension (MTE) and the Capability Hardware Enhanced RISC Instructions (CHERI) architecture offer a complementary defense, particularly for existing code.While these advancements are encouraging, achieving comprehensive memory safety across the entire software industry requires more than just individual technological progress:  we need to create the right environment and accountability for their widespread adoption. Standardization is key to this. To facilitate standardization, we suggest establishing a common framework for specifying and objectively assessing memory safety assurances; doing so will lay the foundation for creating a market in which vendors are incentivized to invest in memory safety. Customers will be empowered to recognize, demand, and reward safety. This framework will provide governments and businesses with the clarity to specify memory safety requirements, driving the procurement of more secure systems. The framework we are proposing would complement existing efforts by defining specific, measurable criteria for achieving different levels of memory safety assurance across the industry. In this way, policymakers will gain the technical foundation to craft effective policy initiatives and incentives promoting memory safety. A blueprint for a memory-safe futureWe know there's more than one way of solving this problem, and we are ourselves investing in several. Importantly, our vision for achieving memory safety through standardization focuses on defining the desired outcomes rather than locking ourselves into specific technologies.To translate this vision into an effective standard, we need a framework that will:Foster innovation and support diverse approaches: The standard should focus on the security properties we want to achieve (e.g., freedom from spatial and temporal safety violations) rather than mandating specific implementation details. The framework should therefore be technology-neutral, allowing vendors to choose the best approach for their products and requirements. This encourages innovation and allows software and hardware manufacturers to adopt the best solutions as they emerge.Tailor memory safety requirements based on need: The framework should establish different levels of safety assurance, akin to SLSA levels, recognizing that different applications have different security needs and cost constraints. Similarly, we likely need distinct guidance for developing new systems and improving existing codebases. For instance, we probably do not need every single piece of code to be formally proven. This allows for tailored security, ensuring appropriate levels of memory safety for various contexts. Enable objective assessment: The framework should define clear criteria and potentially metrics for assessing memory safety and compliance with a given level of assurance. The goal would be to objectively compare the memory safety assurance of different software components or systems, much like we assess energy efficiency today. This will move us beyond subjective claims and towards objective and comparable security properties across products.Be practical and actionable: Alongside the technology-neutral framework, we need best practices for existing technologies. The framework should provide guidance on how to effectively leverage specific technologies to meet the standards. This includes answering questions such as when and to what extent unsafe code is acceptable within larger software systems, and guidelines on structuring such unsafe dependencies to support compositional reasoning about safety.Google's commitmentAt Google, we're not just advocating for standardization and a memory-safe future, we're actively working to build it.We are collaborating with industry and academic partners to develop potential standards, and our joint authorship of the recent CACM call-to-action marks an important first step in this process. In addition, as outlined in our Secure by Design whitepaper and in our memory safety strategy, we are deeply committed to building security into the foundation of our products and services.This commitment is also reflected in our internal efforts. We are prioritizing memory-safe languages, and have already seen significant reductions in vulnerabilities by adopting languages like Rust in combination with existing, wide-spread usage of Java, Kotlin, and Go where performance constraints permit. We recognize that a complete transition to those languages will take time. That's why we're also investing in techniques to improve the safety of our existing C++ codebase by design, such as deploying hardened libc++.Let's build a memory-safe future togetherThis effort isn't about picking winners or dictating solutions. It's about creating a level playing field, empowering informed decision-making, and driving a virtuous cycle of security improvement. It's about enabling a future where:Developers and vendors can confidently build more secure systems, knowing their efforts can be objectively assessed.Businesses can procure memory-safe products with assurance, reducing their risk and protecting their customers.Governments can effectively protect critical infrastructure and incentivize the adoption of secure-by-design practices.Consumers are empowered to make decisions about the services they rely on and the devices they use with confidence – knowing the security of each option was assessed against a common framework. The journey towards memory safety requires a collective commitment to standardization. We need to build a future where memory safety is not an afterthought but a foundational principle, a future where the next generation inherits a digital world that is secure by design.AcknowledgmentsWe'd like to thank our CACM article co-authors for their invaluable contributions: Robert N. M. Watson, John Baldwin, Tony Chen, David Chisnall, Jessica Clarke, Brooks Davis, Nathaniel Wesley Filardo, Brett Gutstein, Graeme Jenkinson, Christoph Kern, Alfredo Mazzinghi, Simon W. Moore, Peter G. Neumann, Hamed Okhravi, Peter Sewell, Laurence Tratt, Hugo Vincent, and Konrad Witaszczyk, as well as many others.

How we kept the Google Play & Android app ecosystems safe in 2024

Wednesday January 29th, 2025 06:39:07 PM
Posted by Bethel Otuteye and Khawaja Shams (Android Security and Privacy Team), and Ron Aquino (Play Trust and Safety) Android and Google Play comprise a vibrant ecosystem with billions of users around the globe and millions of helpful apps. Keeping this ecosystem safe for users and developers remains our top priority. However, like any flourishing ecosystem, it also attracts its share of bad actors. That’s why every year, we continue to invest in more ways to protect our community and fight bad actors, so users can trust the apps they download from Google Play and developers can build thriving businesses. Last year, those investments included AI-powered threat detection, stronger privacy policies, supercharged developer tools, new industry-wide alliances, and more. As a result, we prevented 2.36 million policy-violating apps from being published on Google Play and banned more than 158,000 bad developer accounts that attempted to publish harmful apps. But that was just the start. For more, take a look at our recent highlights from 2024: Google’s advanced AI: helping make Google Play a safer placeTo keep out bad actors, we have always used a combination of human security experts and the latest threat-detection technology. In 2024, we used Google’s advanced AI to improve our systems’ ability to proactively identify malware, enabling us to detect and block bad apps more effectively. It also helps us streamline review processes for developers with a proven track record of policy compliance. Today, over 92% of our human reviews for harmful apps are AI-assisted, allowing us to take quicker and more accurate action to help prevent harmful apps from becoming available on Google Play. That’s enabled us to stop more bad apps than ever from reaching users through the Play Store, protecting users from harmful or malicious apps before they can cause any damage. Working with developers to enhance security and privacy on Google Play To protect user privacy, we’re working with developers to reduce unnecessary access to sensitive data. In 2024, we prevented 1.3 million apps from getting excessive or unnecessary access to sensitive user data. We also required apps to be more transparent about how they handle user information by launching new developer requirements and a new “Data deletion” option for apps that support user accounts and data collection. This helps users manage their app data and understand the app’s deletion practices, making it easier for Play users to delete data collected from third-party apps. We also worked to ensure that apps use the strongest and most up-to-date privacy and security capabilities Android has to offer. Every new version of Android introduces new security and privacy features, and we encourage developers to embrace these advancements as soon as possible. As a result of partnering closely with developers, over 91% of app installs on the Google Play Store now use the latest protections of Android 13 or newer. Safeguarding apps from scams and fraud is an ongoing battle for developers. The Play Integrity API allows developers to check if their apps have been tampered with or are running in potentially compromised environments, helping them to prevent abuse like fraud, bots, cheating, and data theft. Play Integrity API and Play’s automatic protection helps developers ensure that users are using the official Play version of their app with the latest security updates. Apps using Play integrity features are seeing 80% lower usage from unverified and untrusted sources on average. We’re also constantly working to improve the safety of apps on Play at scale, such as with the Google Play SDK Index. This tool offers insights and data to help developers make more informed decisions about the safety of an SDK. Last year, in addition to adding 80 SDKs to the index, we also worked closely with SDK and app developers to address potential SDK security and privacy issues, helping to build safer and more secure apps for Google Play. Google Play’s multi-layered protections against bad apps To create a trusted experience for everyone on Google Play, we use our SAFE principles as a guide, incorporating multi-layered protections that are always evolving to help keep Google Play safe. These protections start with the developers themselves, who play a crucial role in building secure apps. We provide developers with best-in-class tools, best practices, and on-demand training resources for building safe, high-quality apps. Every app undergoes rigorous review and testing, with only approved apps allowed to appear in the Play Store. Before a user downloads an app from Play, users can explore its user reviews, ratings, and Data safety section on Google Play to help them make an informed decision. And once installed, Google Play Protect, Android’s built-in security protection, helps to shield their Android device by continuously scanning for malicious app behavior. Enhancing Google Play Protect to help keep users safe on AndroidWhile the Play Store offers best-in-class security, we know it’s not the only place users download Android apps – so it’s important that we also defend Android users from more generalized mobile threats. To do this in an open ecosystem, we’ve invested in sophisticated, real-time defenses that protect against scams, malware, and abusive apps. These intelligent security measures help to keep users, user data, and devices safe, even if apps are installed from various sources with varying levels of security. Google Play Protect automatically scans every app on Android devices with Google Play Services, no matter the download source. This built-in protection, enabled by default, provides crucial security against malware and unwanted software. Google Play Protect scans more than 200 billion apps daily and performs real-time scanning at the code-level on novel apps to combat emerging and hidden threats, like polymorphic malware. In 2024, Google Play Protect’s real-time scanning identified more than 13 million new malicious apps from outside Google Play1. Google Play Protect is always evolving to combat new threats and protect users from harmful apps that can lead to scams and fraud. Here are some of the new improvements that are now available globally on Android devices with Google Play Services: Reminder notifications in Chrome on Android to re-enable Google Play Protect: According to our research, more than 95 percent of app installations from major malware families that exploit sensitive permissions highly correlated to financial fraud came from Internet-sideloading sources like web browsers, messaging apps, or file managers. To help users stay protected when browsing the web, Chrome will now display a reminder notification to re-enable Google Play Protect if it has been turned off. Additional protection against social engineering attacks: Scammers may manipulate users into disabling Play Protect during calls to download malicious Internet-sideloaded apps. To prevent this, the Play Protect app scanning toggle is now temporarily disabled during phone or video calls. This safeguard is enabled by default during traditional phone calls as well as during voice and video calls in popular third-party apps. Automatically revoking app permissions for potentially dangerous apps: Since Android 11, we’ve taken a proactive approach to data privacy by automatically resetting permissions for apps that users haven't used in a while. This ensures apps can only access the data they truly need, and users can always grant permissions back if necessary. To further enhance security, Play Protect now automatically revokes permissions for potentially harmful apps, limiting their access to sensitive data like storage, photos, and camera. Users can restore app permissions at any time, with a confirmation step for added security. Google Play Protect’s enhanced fraud protection pilot analyzes and automatically blocks the installation of apps that may use sensitive permissions frequently abused for financial fraud when the user attempts to install the app from an Internet-sideloading source (web browsers, messaging apps, or file managers). Building on the success of our initial pilot in partnership with the Cyber Security Agency of Singapore (CSA), additional enhanced fraud protection pilots are now active in nine regions – Brazil, Hong Kong, India, Kenya, Nigeria, Philippines, South Africa, Thailand, and Vietnam. In 2024, Google Play Protect’s enhanced fraud protection pilots have shielded 10 million devices from over 36 million risky installation attempts, encompassing over 200,000 unique apps. By piloting these new protections, we can proactively combat emerging threats and refine our solutions to thwart scammers and their increasingly sophisticated fraud attempts. We look forward to continuing to partner with governments, ecosystem partners, and other stakeholders to improve user protections. App badging to help users find apps they can trust at a glance on Google Play In 2024, we introduced a new badge for government developers to help users around the world identify official government apps. Government apps are often targets of impersonation due to the highly sensitive nature of the data users provide, giving bad actors the ability to steal identities and commit financial fraud. Badging verified government apps is an important step in helping connect people with safe, high-quality, useful, and relevant experiences. We partner closely with global governments and are already exploring ways to build on this work. We also recently introduced a new badge to help Google Play users discover VPN apps that take extra steps to demonstrate their strong commitment to security. We allow developers who adhere to Play safety and security guidelines and have passed an additional independent Mobile Application Security Assessment (MASA) to display a dedicated badge in the Play Store to highlight their increased commitment to safety. Collaborating to advance app security standards In addition to our partnerships with governments, developers, and other stakeholders, we also worked with our industry peers to protect the entire app ecosystem for everyone. The App Defense Alliance, in partnership with fellow steering committee members Microsoft and Meta, recently launched the ADA Application Security Assessment (ASA) v1.0, a new standard to help developers build more secure mobile, web, and cloud applications. This standard provides clear guidance on protecting sensitive data, defending against cyberattacks, and ultimately, strengthening user trust. This marks a significant step forward in establishing industry-wide security best practices for application development. All developers are encouraged to review and comply with the new mobile security standard. You’ll see this standard in action for all carrier apps pre-installed on future Pixel phone models. Looking ahead This year, we’ll continue to protect the Android and Google Play ecosystem, building on these tools and resources in response to user and developer feedback and the changing landscape. As always, we’ll keep empowering developers to build safer apps more easily, streamline their policy experience, and protect their businesses and users from bad actors. 1 Based on Google Play Protect 2024 internal data.

How we estimate the risk from prompt injection attacks on AI systems

Wednesday January 29th, 2025 05:03:41 PM
Posted by the Agentic AI Security Team at Google DeepMindModern AI systems, like Gemini, are more capable than ever, helping retrieve data and perform actions on behalf of users. However, data from external sources present new security challenges if untrusted sources are available to execute instructions on AI systems. Attackers can take advantage of this by hiding malicious instructions in data that are likely to be retrieved by the AI system, to manipulate its behavior. This type of attack is commonly referred to as an "indirect prompt injection," a term first coined by Kai Greshake and the NVIDIA team.To mitigate the risk posed by this class of attacks, we are actively deploying defenses within our AI systems along with measurement and monitoring tools. One of these tools is a robust evaluation framework we have developed to automatically red-team an AI system’s vulnerability to indirect prompt injection attacks. We will take you through our threat model, before describing three attack techniques we have implemented in our evaluation framework.Threat model and evaluation frameworkOur threat model concentrates on an attacker using indirect prompt injection to exfiltrate sensitive information, as illustrated above. The evaluation framework tests this by creating a hypothetical scenario, in which an AI agent can send and retrieve emails on behalf of the user. The agent is presented with a fictitious conversation history in which the user references private information such as their passport or social security number. Each conversation ends with a request by the user to summarize their last email, and the retrieved email in context.The contents of this email are controlled by the attacker, who tries to manipulate the agent into sending the sensitive information in the conversation history to an attacker-controlled email address. The attack is successful if the agent executes the malicious prompt contained in the email, resulting in the unauthorized disclosure of sensitive information. The attack fails if the agent only follows user instructions and provides a simple summary of the email. Automated red-teamingCrafting successful indirect prompt injections requires an iterative process of refinement based on observed responses. To automate this process, we have developed a red-team framework consisting of several optimization-based attacks that generate prompt injections (in the example above this would be different versions of the malicious email). These optimization-based attacks are designed to be as strong as possible; weak attacks do little to inform us of the susceptibility of an AI system to indirect prompt injections.Once these prompt injections have been constructed, we measure the resulting attack success rate on a diverse set of conversation histories. Because the attacker has no prior knowledge of the conversation history, to achieve a high attack success rate the prompt injection must be capable of extracting sensitive user information contained in any potential conversation contained in the prompt, making this a harder task than eliciting generic unaligned responses from the AI system. The attacks in our framework include:Actor Critic: This attack uses an attacker-controlled model to generate suggestions for prompt injections. These are passed to the AI system under attack, which returns a probability score of a successful attack. Based on this probability, the attack model refines the prompt injection. This process repeats until the attack model converges to a successful prompt injection. Beam Search: This attack starts with a naive prompt injection directly requesting that the AI system send an email to the attacker containing the sensitive user information. If the AI system recognizes the request as suspicious and does not comply, the attack adds random tokens to the end of the prompt injection and measures the new probability of the attack succeeding. If the probability increases, these random tokens are kept, otherwise they are removed, and this process repeats until the combination of the prompt injection and random appended tokens result in a successful attack.Tree of Attacks w/ Pruning (TAP): Mehrotra et al. (2024) [3] designed an attack to generate prompts that cause an AI system to violate safety policies (such as generating hate speech). We adapt this attack, making several adjustments to target security violations. Like Actor Critic, this attack searches in the natural language space; however, we assume the attacker cannot access probability scores from the AI system under attack, only the text samples that are generated.We are actively leveraging insights gleaned from these attacks within our automated red-team framework to protect current and future versions of AI systems we develop against indirect prompt injection, providing a measurable way to track security improvements. A single silver bullet defense is not expected to solve this problem entirely. We believe the most promising path to defend against these attacks involves a combination of robust evaluation frameworks leveraging automated red-teaming methods, alongside monitoring, heuristic defenses, and standard security engineering solutions. We would like to thank Vijay Bolina, Sravanti Addepalli, Lihao Liang, and Alex Kaskasoli for their prior contributions to this work.Posted on behalf of the entire Google DeepMind Agentic AI Security team (listed in alphabetical order):Aneesh Pappu, Andreas Terzis, Chongyang Shi, Gena Gibson, Ilia Shumailov, Itay Yona, Jamie Hayes, John "Four" Flynn, Juliette Pluto, Sharon Lin, Shuang Song

Android enhances theft protection with Identity Check and expanded features

Thursday January 23rd, 2025 06:01:21 PM
Posted by Jianing Sandra Guo, Product Manager, Android, Nataliya Stanetsky, Staff Program Manager, Android Today, people around the world rely on their mobile devices to help them stay connected with friends and family, manage finances, keep track of healthcare information and more – all from their fingertips. But a stolen device in the wrong hands can expose sensitive data, leaving you vulnerable to identity theft, financial fraud and privacy breaches. This is why we recently launched Android theft protection, a comprehensive suite of features designed to protect you and your data at every stage – before, during, and after device theft. As part of our commitment to help you stay safe on Android, we’re expanding and enhancing these features to deliver even more robust protection to more users around the world. Identity Check rolling out to Pixel and Samsung One UI 7 devices We’re officially launching Identity Check, first on Pixel and Samsung Galaxy devices eligible for One UI 71, to provide better protection for your critical account and device settings. When you turn on Identity Check, your device will require explicit biometric authentication to access certain sensitive resources when you’re outside of trusted locations. Identity Check also enables enhanced protection for Google Accounts on all supported devices and additional security for Samsung Accounts on One UI 7 eligible Galaxy devices, making it much more difficult for an unauthorized attacker to take over accounts signed in on the device. As part of enabling Identity Check, you can designate one or more trusted locations. When you’re outside of these trusted places, biometric authentication will be required to access critical account and device settings, like changing your device PIN or biometrics, disabling theft protection, or accessing Passkeys. Identity Check gives you more peace of mind that your most sensitive device assets are protected against unauthorized access, even if a thief or bad actor manages to learn your device PIN. Identity Check is rolling out now to Pixel devices with Android 15 and will be available on One UI 7 eligible Galaxy devices in the coming weeks. It will roll out to supported Android devices from other manufacturers later this year. Theft Detection Lock: expanding AI-powered protection to more users One of the top theft protection features introduced last year was Theft Detection Lock, which uses an on-device AI-powered algorithm to help detect when your phone may be forcibly taken from you. If the machine learning algorithm detects a potential theft attempt on your unlocked device, it locks your screen to keep thieves out. Theft Detection Lock is now fully rolled out to Android 10+ phones2 around the world. Protecting your Android device from theft We're collaborating with the GSMA and industry experts to combat mobile device theft by sharing information, tools and prevention techniques. Stay tuned for an upcoming GSMA white paper, developed in partnership with the mobile industry, with more information on protecting yourself and your organization from device theft. With the addition of Identity Check and the ongoing enhancements to our existing features, Android offers a robust and comprehensive set of tools to protect your devices and your data from theft. We’re dedicated to providing you with peace of mind, knowing your personal information is safe and secure. You can turn on the new Android theft features by clicking here on a supported Android device. Learn more about our theft protection features by visiting our help center. Notes Timing, availability and feature names may vary in One UI 7. ↩ With the exclusion for Android Go smartphones ↩


Failed to get content from 'http://Blogs.rsa.com/feed/'
Failed to get content from 'http://malware.dontneedcoffee.com/feeds/posts/default'
Sorry, the http://malwaremustdie.Blogspot.com/feeds/posts/default feed is not available at this time.
Failed to get content from 'http://isc.sans.org/rssfeed.xml'
Failed to get content from 'http://pandalabs.pandasecurity.com/rss.aspx'
Failed to get content from 'https://www.schneier.com/blog/atom.xml'
Sorry, the http://blog.fortinet.com/feed/ feed is not available at this time.
Sorry, the http://erratasec.Blogspot.com/feeds/posts/default feed is not available at this time.




Feed aggregation powered by Syndicate Press.
Processed request in 25.14519 seconds.

convert this post to pdf.
Be Sociable, Share!

Ad