Hacker News

This kind of incompetence makes me seriously doubt that Google is doing anything to more substantially review apps for deeper security issues, either statically or at runtime.

The asymmetry of effort in this situation is profound: consider that you spend all your time and effort writing a long, complex, thoughtful message by hand (the app), taking hundreds or thousands of hours, and then they respond with a machine-generated message that cost them 10 ms of computer time.

The solution, I think, is obvious: Google needs a "zone defense" with the Play Store, a much larger (and more expensive) staff, to do in-depth app reviews and maintain a stable, stateful relationship with the developer over time. This person would, in fact, become a 3rd-party "expert" on a small set of apks and their contents, with a "feel" for what is changing over time, with the core mission of protecting users from malice, but working with devs, as a human being.



> This kind of incompetence makes me seriously doubt that Google is doing anything to more substantially review apps for deeper security issues, either statically or at runtime.

I actually know the team that does security vuln automation for Google Play. They've found millions of vulns in apps over the years. One of the challenges they face is precisely this sort of headline: how do you use static analysis to find vulns while ensuring you don't inundate users with false positives, forcing them down the admittedly limited support channels?

> The solution, I think, is obvious: Google needs a "zone defense" with the Play Store, a much larger (and more expensive) staff, to do in-depth app reviews and maintain a stable, stateful relationship with the developer over time. This person would, in fact, become a 3rd-party "expert" on a small set of apks and their contents, with a "feel" for what is changing over time, with the core mission of protecting users from malice, but working with devs, as a human being.

This sort of exists. Google pays external hackers who find vulns in popular apps via a rewards program. These don't need to be Google's apps. There may be other systems for top partners or specific kinds of apps (the org is big) but I'm not aware of anything personally.

Expanding beyond a small subset of apps is challenging. Not only are there millions of apps, each app contains tens, hundreds, or even thousands of individual apks. The staff needed to have a concierge for each app would be absolutely freaking enormous, perhaps even larger than the number of people on the planet who actually have deep security expertise on the Android platform.


Sounds to me like what you're saying is that the walled-garden approach of needing to approve every app that ever gets developed is infeasible without creating kafkaesque conditions for anyone dealing with the automation. And I fail to see how you've made a case for this automation being all that good in the first place: your main argument for it is that it's found "millions of vulns in apps over the years," yet you later cite "millions of apps" as a reason you can't expand the support team.

I would suggest that it might be worthwhile to use OS-level features to stop apps from behaving maliciously in more general ways, but a lot of what I would consider malicious behavior (e.g. sending user analytics to third parties, feeding them misleading ads, messing with other processes, etc) is part of Google's business model or claims of added value in many cases, so that seems unlikely to happen.

You are nonetheless astute to point out that we can't really blame the individual or even group-wise incompetence of their support teams here. What is worth blaming is the entire business model of trying to own and control a platform that supports so many users without giving them the autonomy to self-govern. No company can possibly be so many things to so many people and not screw them over. In a way, it's the same problem planned economies have. Even making the very generous assumption that this is never out of malice or greed, we can still view the major problems millions of people face due to this scale and inflexibility as practically inevitable.


> Sounds to me like what you're saying is that the walled-garden approach of needing to approve every app that ever gets developed is infeasible without creating kafkaesque conditions for anyone dealing with the automation. And I fail to see how you've made a case for this automation being all that good in the first place: your main argument for it is that it's found "millions of vulns in apps over the years," yet you later cite "millions of apps" as a reason you can't expand the support team.

People definitely make that claim. I don't think I fully agree. From the reviews of this particular system I've seen, they actually hit virtually zero false positives. The challenge is that this comes at the high cost of missed issues, which also generates complaints.
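That tradeoff is easy to see with a toy threshold sketch (invented suspicion scores and labels, not anything from Google's actual system): tightening the flagging threshold drives false positives toward zero, but more real issues slip through.

```python
# Toy illustration of the precision/recall tradeoff in automated review.
# Each entry is (suspicion_score, actually_bad); all values are made up.
apps = [
    (0.95, True), (0.90, True), (0.70, True), (0.60, False),
    (0.40, True), (0.30, False), (0.10, False), (0.05, False),
]

def counts(threshold):
    """Return (false_positives, false_negatives) for a flagging threshold."""
    fp = sum(1 for s, bad in apps if s >= threshold and not bad)
    fn = sum(1 for s, bad in apps if s < threshold and bad)
    return fp, fn

# A permissive threshold catches every bad app but wrongly flags a good one...
assert counts(0.35) == (1, 0)
# ...while a strict one has zero false positives but misses a real issue.
assert counts(0.65) == (0, 1)
```

No fixed threshold gets both numbers to zero here, which is the "have their cake and eat it too" problem raised elsewhere in the thread.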

> I would suggest that it might be worthwhile to use OS-level features to stop apps from behaving maliciously in more general ways, but a lot of what I would consider malicious behavior (e.g. sending user analytics to third parties, feeding them misleading ads, messing with other processes, etc) is part of Google's business model or claims of added value in many cases, so that seems unlikely to happen.

Unfortunately, people also get pissed when platform behaviors are locked down to prevent abuse. Heck, people demand to have access to rootkits despite also wanting it to be impossible for a malicious app to harm them.


They can’t really have their cake and eat it too.

Either they have a system with zero false positives, and they should have a review team for all the complaints about things that slipped through. Or they should aim for zero false negatives and have a review team for anything that gets stuck.


A false positive is "rejecting an app because of a behavior that isn't actually present." You've got them backwards.
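In standard confusion-matrix terms (a hypothetical sketch, framing app review as a binary "is this app bad?" test):

```python
# Hypothetical framing: app review as a binary classifier whose
# positive class is "this app violates policy / is malicious".
def outcome(flagged: bool, actually_bad: bool) -> str:
    if flagged and actually_bad:
        return "true positive"    # bad app correctly rejected
    if flagged and not actually_bad:
        return "false positive"   # good app wrongly rejected (this headline)
    if not flagged and actually_bad:
        return "false negative"   # bad app slips through to users
    return "true negative"        # good app correctly approved

assert outcome(True, False) == "false positive"
assert outcome(False, True) == "false negative"
```

So "things that slipped through" are false negatives, and "things that get stuck" are false positives.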


> I actually know the team that does security vuln automation for Google Play. They've found millions of vulns in apps over the years. One of the challenges they face is precisely this sort of headline: how do you use static analysis to find vulns while ensuring you don't inundate users with false positives, forcing them down the admittedly limited support channels?

Pay people to do tiered support.

"but that costs money"

Make less. Or don't have an app store.


> Pay people to do tiered support.

This exists. You can sign up for a contract that will grant you various tiers of support.


>Google pays external hackers who find vulns in popular apps via a rewards program.

How does that work? Is the submission farmed out to a 3rd party as part of the verification process, and proactively checked? Or is it reactive, similar to a bug bounty? Are there people out there making their living running apks in desktop simulators looking for issues?

I always wondered about the economics of checking huge quantities of arbitrary code (well, bytecode) for vulnerabilities, even for a 30% cut (which is probably 0 for 99% of apps, right? I would expect a power law distro). Kinda sounds like Google solved this by running the apks through something like a CI/CD gauntlet and then...hoping for the best.
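The power-law intuition is easy to check with a quick simulation (all numbers hypothetical; a Pareto draw is just a stand-in for app revenue, not actual Play Store data):

```python
import random

random.seed(0)
# Hypothetical: draw 100,000 "app revenues" from a heavy-tailed Pareto
# distribution, then look at what share of total revenue (and hence of
# any 30% cut) comes from just the top 1% of apps.
revenues = sorted((random.paretovariate(1.2) for _ in range(100_000)),
                  reverse=True)
top_1pct_share = sum(revenues[:1000]) / sum(revenues)
print(f"top 1% of apps earn {top_1pct_share:.0%} of total revenue")
```

Under a tail exponent like this, roughly half the revenue sits in the top 1% of apps, which is consistent with the "30% cut is probably 0 for 99% of apps" guess: the review costs are per-app, but the income funding them is not.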

And of course you can't be too transparent or bad actors will game the system. It's almost as if, as a sibling commenter mentions, it's just not possible to run a walled garden that adequately detects malice at scale.

Here's an idea: instead of charging 30%, you should waive that if the dev team agrees to vet 5 other apps for you, over time, especially the open source ones.


> How does that work? Is the submission farmed out to a 3rd party as part of the verification process, and proactively checked? Or is it reactive, similar to a bug bounty?

Bug Bounty. Person finds vuln in popular app. Person submits vuln to Google. Vuln gets reported to developer. Person gets paid.

> Are there people out there making their living running apks in desktop simulators looking for issues?

Most of them use tools, I think. I don't have stats on any individual, but given four-figure payouts per issue I could definitely believe somebody living in Eastern Europe or wherever is making bank on this.

> I always wondered about the economics of checking huge quantities of arbitrary code (well, bytecode) for vulnerabilities, even for a 30% cut (which is probably 0 for 99% of apps, right? I would expect a power law distro). Kinda sounds like Google solved this by running the apks through something like a CI/CD gauntlet and then...hoping for the best.

I'm not sure it is just hope. I don't know how that team works specifically, but I know that they aren't just saying "hey we hope it works" in their reviews with leadership.

> Here's an idea: instead of charging 30%, you should waive that if the dev team agrees to vet 5 other apps for you, over time, especially the open source ones.

If you think that Google's policy enforcement and support is a kafkaesque nightmare now, could you imagine if your app was booted off Play because some other devs working at some company you've never heard of decided your app was bad? How would Google evaluate the quality of these investigations? With only five apps you don't have enough volume to develop a reputation, so Google would either be forced to repeat all of the investigations or simply have zero oversight over the process.


>could you imagine if your app was booted off Play because some other devs working at some company you've never heard of decided your app was bad?

I would assume they'd give a reason for booting the app, which could be verified by Google and the author. I would imagine the more likely error mode would be simply clicking "okay" without actually looking at the code at all. You know, like some devs do with code reviews!


That sounds awesome to me, but if they were going to do that, the cost of submitting an Android app (or the % take by Google on sales) would have to skyrocket to make it worth it. As someone who rejoices in small developers, I would hate to see that.

I think it's ok to do automated review for round 1, but I would like to see a human field appeals. Over time I would also think that will help find edge cases and cracks in the automation so that it can be further improved.

Edit: Based on other comments here, it sounds like that may be what Google is starting to do



