After years of depending solely on automated processes to weed out malicious or otherwise undesirable apps from Google Play, the search giant announced on Tuesday that it has added human reviewers to the mix in a move that more closely aligns its practices with Apple's.
"Several months ago, we began reviewing apps before they are published on Google Play to better protect the community and improve the app catalog," Google Play product manager Eunice Kim wrote in a blog post. "This new process involves a team of experts who are responsible for identifying violations of our developer policies earlier in the app lifecycle."
Kim added that the review team clears or rejects apps in a matter of hours, saying that there has been "no noticeable change" for developers since human reviewers were introduced.
The Play Store has relied on software to police submissions since it opened, a policy that has drawn both praise and criticism. Developers were pleased with quick turnaround times, but less scrupulous individuals have taken advantage of the situation to introduce a multitude of fake, useless, or malicious apps designed to trick Android users into installing them.
That is the situation Apple sought to avoid by choosing to employ human reviewers when the App Store opened in 2008. The decision was largely successful for users, with problematic apps rarely making it into the store, though developers have often expressed frustration at the lack of transparency in the sometimes byzantine review process.
Google also announced the introduction of region-specific app age ratings. Developers can use a questionnaire to determine the proper rating for their apps in various locales: ESRB ratings in North America and PEGI ratings in Europe, for instance.