May 01

Google Play is officially launching its own version of privacy-focused “nutrition labels” for apps. A new section in Play Store listings requires developers to describe the data their app gathers and how that data is used, giving users crucial information at a glance so they can decide whether to proceed with the installation.

According to Google, users want to understand why an app collects their data, whether the developer shares it with third parties, and how that data is protected. The new section is titled ‘Data safety’ and appears alongside the usual sections like ‘About this app’ and ‘Ratings and reviews’. Developers will declare not only what data they collect but also what data they share with third parties, along with the purpose behind the collection. Google showed off its version last year but is only beginning to roll it out this week, and the rollout is gradual: Android users won’t necessarily see the new section immediately, but it should reach everyone over the next couple of weeks.

Suzanne Frey, Vice President of Product, Android Security and Privacy at Google, said in a blog post: “You alone are responsible for making complete and accurate declarations in your app’s store listing on Google Play.”

Failure to do so is a policy violation and can lead to the app’s suspension from the Google Play Store.

“Google Play reviews apps across all policy requirements; however, we cannot make determinations on behalf of the developers of how they handle user data. Only you possess all the information required to complete the Data safety form. When Google becomes aware of a discrepancy between your app behavior and your declaration, we may take appropriate action, including enforcement action,” explained Frey.

If the user wants to learn more about a particular entry, tapping the corresponding item expands it to reveal more detail on what is collected or shared. The third pillar of the Data safety section covers the app’s security practices, describing the mechanisms employed to protect the collected data, such as following the MASVS standard. This third section also clarifies whether users are given the option to request deletion of their data at any time. And finally, Data safety will specify whether the app follows Google Play’s Families Policy, which is geared toward children’s protection.

Developers can begin declaring how collected data is used today, and they have until July 20th, 2022 to complete their submissions.

Google says developers provide this information themselves and that Google does not independently verify it. However, if a developer is found to have misrepresented their data use disclosures, they will be required to correct the information. Google is essentially relying on developers to be honest in the information they provide to users in this section.

While Google’s move is beneficial to Android users, Apple already introduced a similar feature, ‘Privacy Nutrition Labels’, in 2020 as part of the numerous privacy-enhancing features in iOS 14. This is another case where competition in the mobile OS space has brought positive developments, giving users more insight and control over how their data is handled by the various software that runs on their smartphones.

While both sets of labels focus on informing users about how apps collect and manage data and user privacy, there are some key differences. Apple’s labels largely focus on what data is being collected, including data used for tracking purposes, and on informing the user what’s linked to them. Google’s labels, meanwhile, put a bigger focus on whether you can trust that the collected data is being handled responsibly, by allowing developers to disclose whether they follow best practices around data security. The labels also give Android developers a way to make their case for why they collect the data directly on the label, so users can understand how the data is used (for app functionality, personalization, and so on) to help inform their decision to download the app. Users can also see whether the data collection is required or optional.

Up until now, Android apps on the Play Store only had to list a link to their privacy policy under the “Additional Information” section and provide a contact email. Since that privacy policy is hosted externally, it is subject to change, might be vague, may not disclose all the crucial details about data collection and protection, and the link may even be broken.

The Data safety section gives users a clear understanding of what happens to their data without requiring them to dig through a privacy policy, while also giving Google a basis for enforcement. And since reading long stretches of legal jargon isn’t exactly what users look forward to when browsing the Google Play Store for new apps, almost nobody reads those policies anyway.

App privacy labels have already been accused of being an unreliable source of information following their launch on the App Store. According to a report by The Washington Post last year, many of the labels they reviewed in a spot-check provided false information; for instance, apps claiming they collected no data were actually found to be doing the opposite, collecting it and sharing it. Google nevertheless seems confident in its approach, even though the Data safety section ultimately depends on developers declaring their practices honestly.

With a large number of scam apps, malware, and predatory loan apps found on Google Play, this new Data safety section will not only be useful for Android users but will also allow Google to find policy violators more quickly. For more information on the new system, what it includes, and how it works, check out Google’s support page.


May 21

In January 2011, well-known computer hacker George “GeoHot” Hotz discovered and published the keys to the Sony PlayStation 3 game console.  GeoHot had previously cracked the iPhone, allowing users to “jailbreak” their phone and run any software they want.

Crack Goes the PS3

Around the same time, another hacker group, fail0verflow, had also cracked the PS3 and released tools that enabled users to install the Linux operating system on it.  The ability to turn the PS3 into a regular Linux computer was a favorite among geeks and hackers.  Sony originally provided this feature, but angered the hacker community when it turned it off in 2010.

GeoHot took it to the next level and released the PS3’s “root key.”  With this key, anyone can sign and run essentially any software on the PS3.  And a root key is nearly impossible to change without breaking all existing PS3 software.  Hence, GeoHot permanently and publicly cracked the PS3 platform.

Continue reading »

May 11

Imagine you are at a cocktail party.  You are having a private conversation with someone you thought was a trusted business associate.  You lean forward and whisper confidential information in his ear.  He immediately repeats what you said aloud.

Your secret may not be exposed – depending on whether anyone is within earshot – but this person has violated your trust.  You are unlikely to share any more secrets with him.

This is what it’s like when a website or online store emails your password in plaintext.  The vendor has violated your trust and called into question whether you should continue to do business with them.
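
To make the point concrete: a site that can email your password back to you has necessarily stored it in a recoverable form.  Here is a minimal sketch of the alternative, using Node's built-in crypto module (my own illustration, not any particular vendor's code): the server stores only a salted hash, so it can verify a login but can never reproduce the password.

import { randomBytes, scryptSync, timingSafeEqual } from "crypto";

// Derive a salted hash and store "salt:hash" instead of the password itself.
function hashPassword(password: string): string {
  const salt = randomBytes(16);
  const hash = scryptSync(password, salt, 64); // scrypt is a memory-hard KDF
  return `${salt.toString("hex")}:${hash.toString("hex")}`;
}

// Re-derive the hash from the login attempt and compare in constant time.
function verifyPassword(password: string, stored: string): boolean {
  const [saltHex, hashHex] = stored.split(":");
  const hash = scryptSync(password, Buffer.from(saltHex, "hex"), 64);
  return timingSafeEqual(hash, Buffer.from(hashHex, "hex"));
}

const record = hashPassword("correct horse battery staple");
console.log(verifyPassword("correct horse battery staple", record)); // true
console.log(verifyPassword("wrong guess", record));                  // false

A dedicated password-hashing scheme such as bcrypt or Argon2 works just as well; the point is that nothing recoverable ever reaches the database, so there is nothing to email back.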

Continue reading »

Jun 04

It’s a standard movie cliché: A hacker pounds away on his keyboard for 30 seconds to break a military-grade encryption scheme.  Nevermind that in real life it would take 8.4 million CPU years to factorize a 1024-bit number in software.  (Although the days of total security with 1024-bit RSA are coming to an end.)

SMBC, embedded with permission

Apr 21

During a recent security audit, a company discovered that a blonde employee was using the following password:

“MickeyMinniePlutoHueyLouieDeweyDonaldGoofySacramento”

When the company asked the blonde why she had such a long password, she said the login screen required the password to be at least 8 characters long and include at least one capital.

From Politically Incorrect Humor

May 21

Want to snoop on your friends’ porn viewing habits?  Then follow these simple steps:

Step 1.  Copy and paste some code into a widget on your website or blog.

Step 2.  Send your friends to the webpage where you put the widget.  Their porn history will be captured in the widget.

Step 3.  See what porn sites your friends have been visiting by looking at the widget you put on your website.

How does this work?  The widget takes advantage of a security leak in web style sheets (CSS).  Your web browser displays links you have visited in a different color.  The code mentioned above renders a list of porn sites and detects which ones have been visited based on the link color.  The best/worst part of this trick is that it will likely never be fixed, because it exploits a fundamental feature of the web browser.
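
For the curious, here is a minimal sketch of the underlying trick in plain browser TypeScript (my own illustration with made-up URLs, not the widget's actual code).  It injects a link for each site it wants to test and reads back the link's computed color; note that browsers later changed what they report for :visited links, so this exact check no longer works in current browsers.

// Sketch of :visited history sniffing. Assumes the page uses the browser's
// default link colors; real widgets typically define their own :visited style
// and compare against that instead.
const sitesToTest = ["http://example-site-1.com", "http://example-site-2.com"]; // hypothetical list

function sniffHistory(urls: string[]): string[] {
  const visited: string[] = [];
  for (const url of urls) {
    const link = document.createElement("a");
    link.href = url;
    document.body.appendChild(link);
    // Visited links render in a different color (classically purple, #551A8B).
    if (getComputedStyle(link).color === "rgb(85, 26, 139)") {
      visited.push(url);
    }
    link.remove();
  }
  return visited;
}

console.log(sniffHistory(sitesToTest)); // sites found in the browser history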

We installed this on one of our blogs, and it failed to catch any of the porn sites that we’ve visited.  I guess ProgrammersLoveMeganFox.com isn’t considered porn.

I Caught You Watching Porn

Apr 07

From xkcd: A webcomic of romance, sarcasm, math, and language

More funny stuff

Mar 20

Presenters at the CanSecWest security conference detailed how to sniff data by analyzing keystroke vibrations using a laser pointed at a laptop computer, or through electrical signals coming from a PS/2 keyboard on a PC plugged into an electrical socket.

Using about $80 worth of equipment, researchers pointed a laser at the reflective surface of a laptop from 50 to 100 feet away and were able to determine what letters were typed.  Line-of-sight is required, but it works through a glass window.  Using an infrared laser would prevent the victim from discovering they are under surveillance.

In the second attack method, researchers were able to determine keystrokes on a PS/2 keyboard through the ground line of a power plug in an outlet 50 feet away.  They used a digital oscilloscope and an analog-to-digital converter, as well as filtering to isolate the keystroke pulses from other power line noise.
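
As a toy illustration of that filtering step (my own sketch, which assumes you already have a buffer of digitized samples; the researchers' actual pipeline was far more sophisticated), one crude approach is to strip out the slow mains hum with a running-mean high-pass filter and flag whatever spikes remain as candidate keystroke pulses:

// Toy pulse detector: subtract a local running mean (which tracks the slow
// 50/60 Hz mains component) and flag samples that still exceed a threshold.
// Purely illustrative; the window size and threshold are made up.
function detectPulses(samples: number[], windowSize = 200, threshold = 0.5): number[] {
  const pulseIndices: number[] = [];
  for (let i = 0; i < samples.length; i++) {
    const start = Math.max(0, i - windowSize);
    const end = Math.min(samples.length, i + windowSize);
    let mean = 0;
    for (let j = start; j < end; j++) mean += samples[j];
    mean /= end - start;
    // Whatever survives the subtraction is the fast, spiky part of the signal.
    if (Math.abs(samples[i] - mean) > threshold) pulseIndices.push(i);
  }
  return pulseIndices;
}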

Story at CNET

Mar 11

For decades we’ve been told by security software vendors that to truly delete data from a hard drive, you have to overwrite the data multiple times with different patterns of 0s and 1s.  But now we can file this away with other computer urban legends.

Computer forensics expert Craig Wright and his colleagues ran a scientific study in which they overwrote hard drive data and then examined the magnetic surfaces with a microscope.  They published their results in Lecture Notes in Computer Science as Overwriting Hard Drive Data: The Great Wiping Controversy.

The study concludes that after a single overwrite of hard drive data, the likelihood of being able to reconstruct even a single byte is 0.97 percent.  The odds of recovering multiple sequential bytes of data (such as a password or document) are far lower still (at 0.97 percent per byte, correctly recovering even a four-byte sequence works out to roughly one chance in a hundred million) and would require exact knowledge of where on the hard drive the sensitive data is located.

This means data-wiping software that overwrites data up to 35 times may make you feel better, but it only wastes your time and money.
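
For the common case of scrubbing a single file rather than an entire drive, a single pass of zeros is therefore all the overwriting you need.  Here is a minimal Node-based sketch (my own illustration; it assumes the filesystem overwrites blocks in place, which is not guaranteed on SSDs or copy-on-write filesystems):

import { openSync, fstatSync, writeSync, closeSync, unlinkSync } from "fs";

// Overwrite a file with one pass of zeros, then delete it. This does nothing
// about temp files, shadow copies, backups, or other stray copies of the data.
function wipeFile(path: string): void {
  const fd = openSync(path, "r+");
  try {
    const size = fstatSync(fd).size;
    const zeros = Buffer.alloc(64 * 1024); // write 64 KiB of zeros at a time
    let written = 0;
    while (written < size) {
      written += writeSync(fd, zeros, 0, Math.min(zeros.length, size - written), written);
    }
  } finally {
    closeSync(fd);
  }
  unlinkSync(path);
}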

A much bigger data security challenge is overwriting all copies of the data that's to be deleted.  This is not a problem if you are wiping an entire hard drive, but if you are trying to delete a single sensitive document, you have to worry about temp files, shadow copies, backups, file fragments, the Windows swap file, etc.

Jan 14

Experts from more than 30 U.S. and international cyber-security organizations jointly released a consensus list of the 25 most dangerous programming errors that lead to security bugs and cyber-crime.

The impact of these programming errors is significant.  Just two of them resulted in more than 1.5 million website security breaches during 2008.  These breaches allowed malicious software to take control of computers that visited those web sites, turning them into zombies that committed further cyber-crimes.

Shockingly, most programmers do not understand or look for these errors.  Colleges rarely teach programming students how to avoid these errors.  And most software companies don’t explicitly test for these errors before releasing their products.

Continue reading »