In fact, the bug with Cloudbleed is one that, as far as anyone understands, affects all of their customers, and any information once leaked (which could be done entirely passively) cannot simply be retracted: much has been said about Google going to heroic efforts to scrub their caches, but people have continued to find cached files containing private information, pages whose cache has been purged still show result snippets with private information, and there are numerous crawlers that nobody would even think to contact.

In today's case, their article on the subject talks about how quickly they mitigated the issue, but it makes it seem as if the issue requires an active attacker and as if they have no reports of such a thing happening.

While some people (including a developer on Google's Project Zero) have claimed CloudFlare has an "excellent reputation for transparency", I would instead claim that CloudFlare mostly likes to look "triumphant": their reporting of issues is extremely detailed when it comes to wide-scale service and software bugs (ones that affect everyone, not just them), and their reporting of governmental data requests is absolutely admirable; but their reporting also tends to talk up their involvement in helping others (as with Internet-scale routing issues and denial-of-service attacks), puts them at the center of PR that had little to do with them (as with Heartbleed, where they even assured people that private key data could not be leaked: they were later proven wrong), and downplays their own issues (as we saw yet again with what some are calling Cloudbleed).

All I managed to get from CloudFlare were replies on forums insisting that their somehow not noticing an issue this serious was OK, as it only applied to websites that opted in to the mechanism (as if those sites were "asking for it" :P).
I wrote this in 2012, when CloudFlare (an almost awkwardly popular CDN) introduced a bug into websites they hosted using some of their "advanced features": injected JavaScript that managed not just to break the website but to lock up entire web browsers by tickling a scheduler starvation bug in WebCore. That issue was active for the better part of a day, until I managed to teach people at CloudFlare what was wrong with their code; AFAIK, there was never any disclosure or post-mortem from them about it.