Google users saw a well-disguised phishing attack earlier this week, in the form of an email purporting to be a request to share a Google Docs document.
The link redirects to a familiar OAuth request asking the user to grant the app access to their Gmail account. If the user clicks the prompt to grant that permission, the phish harvests all the contacts in the victim's Gmail address book and sends them copies of the phishing mail.
The attack uses the OAuth authorization interface, which many Web services also use to let users log in without a password. By abusing OAuth, the attack is able to present a legitimate Google dialogue box requesting authorization. The consent screen, however, also asks for permission to "view and manage your e-mail" and "view and manage the files in your Google Drive."
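To make the mechanism concrete, here is a hedged sketch of how such an app assembles a perfectly legitimate Google OAuth 2.0 authorization URL. The endpoint and parameter names follow Google's documented flow; the client ID and redirect URI below are placeholders, and the scopes mirror the broad mail and Drive permissions the phish requested:

```python
from urllib.parse import urlencode

# Google's documented OAuth 2.0 authorization endpoint. The consent
# screen it renders shows whatever display name the registered client
# chose, alongside the requested scopes.
AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def build_consent_url(client_id, redirect_uri):
    # Both values are attacker-controlled in the phishing scenario;
    # the ones passed below are illustrative placeholders.
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join([
            "https://mail.google.com/",               # full Gmail access
            "https://www.googleapis.com/auth/drive",  # full Drive access
        ]),
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

url = build_consent_url("attacker-app.apps.googleusercontent.com",
                        "https://example.invalid/callback")
```

Nothing in this URL is forged: the domain, the TLS certificate, and the consent dialogue are all genuinely Google's, which is what made the attack so convincing.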
Google was quick to react to the situation. It shut down the OAuth request, redirected users to an error page, and automatically revoked the permissions from affected users' accounts.
Google issued a statement on the phishing attempt, saying:
We have taken action to protect users against an e-mail impersonating Google Docs & have disabled offending accounts. We’ve removed the fake pages [and] pushed updates through Safe Browsing, and our abuse team is working to prevent this kind of spoofing from happening again. We encourage users to report phishing e-mails in Gmail.
As it turns out, the attack exploited a threat that at least three security researchers had long predicted, one as early as October 2011. The party behind the attack may have copied the technique from a proof of concept posted by one security researcher to GitHub in February.
Security researcher Greg Carson said that he discovered the issue while his company was migrating users to Google Apps for Enterprise. He posted his findings on his blog, aiming to bring the issue's severity to light. Carson said he initially believed the phishing attack was essentially a copy of the Google Apps Script code from his GitHub page, but retracted that statement after further analysis. Even so, the attack's modus operandi was just as Carson had reasoned in his post.
Carson had also warned of other possible attacks in which malicious Web applications leverage Google's OAuth 2.0-based authentication system. He believes future malicious campaigns could involve an attacker using keyword searches to identify and harvest sensitive documents from a victim's Google Drive account.
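The kind of keyword harvesting Carson warned about maps onto the Drive API's documented `files.list` search parameter. A minimal sketch, building (but never sending) the request an attacker's Drive-scoped token would authorize; the search term is purely illustrative:

```python
from urllib.parse import urlencode

# Drive v3 files.list endpoint; "q" accepts search expressions such as
# the documented "fullText contains '...'" operator. With a stolen
# OAuth grant covering Drive, a request like this would enumerate
# every matching document. Constructed here for illustration only.
DRIVE_LIST = "https://www.googleapis.com/drive/v3/files"

def build_harvest_query(keyword):
    params = {
        "q": f"fullText contains '{keyword}'",
        "fields": "files(id,name)",   # just IDs and names of the hits
    }
    return DRIVE_LIST + "?" + urlencode(params)

query_url = build_harvest_query("password")
```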
The potential for an OAuth-based attack on Google users was first discussed on an Internet Engineering Task Force OAuth mailing list by researcher Andre DeMarre in October of 2011.
Imagine someone registers a client application with an OAuth service, let’s call it Foobar, and he names his client app “Google, Inc.” The Foobar authorization server will engage the user with “Google, Inc. is requesting permission to do the following.” The resource owner might reason, “I see that I’m legitimately on the https://www.foobar.com site, and Foobar is telling me that Google wants permission. I trust Foobar and Google, so I’ll click Allow.” To make the masquerade act even more convincing, many of the most popular OAuth services allow app developers to upload images which could be official logos of the organizations they are posing as. Often app developers can supply arbitrary, unconfirmed URIs which are shown to the resource owner as the app’s website, even if the domain does not match the redirect URI. Some OAuth services blindly entrust client apps to customize the authorization page in other ways.
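DeMarre's core point is that the display name on the consent screen comes from whoever registered the client, with nothing tying it to a verified organization. A toy sketch of that rendering step (all names hypothetical):

```python
def render_consent_prompt(client_display_name, scopes):
    # The authorization server interpolates the developer-chosen
    # display name verbatim into the prompt the resource owner sees.
    lines = [f"{client_display_name} is requesting permission to do the following:"]
    lines += [f"  - {scope}" for scope in scopes]
    return "\n".join(lines)

prompt = render_consent_prompt(
    "Google, Inc.",   # chosen freely by the app developer
    ["View and manage your e-mail",
     "View and manage the files in your Google Drive"],
)
```

Because the surrounding page really is the provider's own site, the user has no visual cue that "Google, Inc." here is just an attacker-supplied string.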
Meanwhile, web developer Andrew Cantino found that during the authentication process, "Google in no way makes it clear that this app was created by a 3rd party and is not affiliated with Google." He had built a proof-of-concept Web application called "Google Security Updater" to test the issue. While Cantino's Apps Script code only added a new label to Gmail messages, he warned:
But it could have deleted data, e-mailed a link to the script to everyone in the user’s contact list, manipulated personal information, or stolen data and sent it to a 3rd party.
Organizations can try to stop future OAuth-based attacks like these by staying vigilant. "Look-alike" host names on non-standard top-level domains (TLDs) such as .pro, .win, and .download could be blocked by intrusion prevention systems or DNS "greylisting," or spotted early by monitoring DNS traffic.
Carson suggested organizations use cloud access security brokers (CASBs), platforms that run within an organization’s network and check cloud applications’ permission requests against established policies. A CASB would have blocked access to a fake “Google Docs” application, for example.
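The policy check a CASB performs can be sketched as comparing an app's identity and requested OAuth scopes against an organization's allowlist before the grant goes through. Everything below (client IDs, scope policy) is hypothetical:

```python
# Hypothetical org policy: verified client_id -> scopes the org permits.
ALLOWED_APPS = {
    "trusted-crm.apps.example": {
        "https://www.googleapis.com/auth/drive.readonly",
    },
}

def casb_allows(client_id, requested_scopes):
    allowed = ALLOWED_APPS.get(client_id)
    if allowed is None:
        # Unknown app, e.g. a fake "Google Docs" client: blocked.
        return False
    # Known app: every requested scope must fall within policy.
    return set(requested_scopes) <= allowed
```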
Until Google strengthens its services, the only defense for users seems to be a close examination of OAuth permission requests to judge whether they're legitimate. Right now, the OAuth consent interface relies only on information supplied by developers. "I'd like to see Google provide better warnings to users when loading applications for Google services and provide stricter controls for admins around app script authorization," said Carson.
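Users who regret a grant can also revoke it themselves. Google documents a token-revocation endpoint that takes the token as a POST parameter; this sketch only builds the request rather than sending it, and the token value is a placeholder:

```python
from urllib.parse import urlencode

# Google's documented OAuth 2.0 token-revocation endpoint.
REVOKE_ENDPOINT = "https://oauth2.googleapis.com/revoke"

def build_revoke_request(token):
    # Returns the endpoint and the form-encoded POST body; the caller
    # would send it as application/x-www-form-urlencoded.
    return REVOKE_ENDPOINT, urlencode({"token": token})

url, body = build_revoke_request("ya29.EXAMPLE-TOKEN")
# e.g. requests.post(url, data=body,
#                    headers={"Content-Type": "application/x-www-form-urlencoded"})
```

In practice, most users would instead review and remove grants from their Google account's connected-apps settings page.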