Those of us who spent time in the security community in the 1990s and 2000s remember the bad old days of bug reporting, when there was a constant drumbeat of stories of security researchers trying to responsibly improve security and software vendors responding with legal threats. I have personally been the target of such threats, have stood behind my researchers as a co-founder of a security firm, and have acted as a pro-bono expert witness on behalf of security researchers facing civil and criminal action.

I am very happy, personally and professionally, that in 2015 this situation is starting to change. Many tech companies now run paid bug bounty programs, and we have seen initial steps by earlier established industries to encourage and reward responsible bug disclosure. As the CSO of Facebook, I am very proud that we run one of the world's most successful bug bounty programs, paying out over $4.3M to researchers over the past several years, and in my discussions with other CSOs I have always encouraged them to explore these programs as part of a comprehensive security program.

These early steps towards a future of better cooperation between security researchers and big companies are encouraging, but the current situation is fragile and could easily revert to the days of anonymous postings to Full-Disclosure and researchers facing overblown criminal charges. It is this danger of reversion that led me to get personally involved in the situation Wes Wineberg describes in his blog post. Some facts:
  • Wes was one of several people to report to us that Instagram was exposing a Ruby-based admin panel with known flaws.
  • As is standard, we responded to Wes thanking him for his submission and telling him we would investigate.
  • Despite this not being the first report of this specific bug, we informed Wes that we would pay him $2500. Up to this point, everything Wes had done was appropriate, ethical, and in the scope of our program.
  • Wes used the RCE flaw on the AWS instance hosting that panel to get code execution and started rummaging around for useful information. He found AWS API keys. He then used these keys to access an S3 bucket and download Instagram technical and system data (non-user data). The fact that AWS keys can be used to access S3 is expected behavior and would not be considered a security flaw in itself. Intentional exfiltration of data is not authorized by our bug bounty program, is not useful in understanding and addressing the core issue, and was not ethical behavior by Wes.
  • Wes was not happy with the amount we offered him, and responded with a message explaining that he had downloaded data from S3 using the AWS key and was planning on writing about it. We were surprised because he did not mention these actions in his previous correspondence with us.
  • At this point, it was reasonable to believe that Wes was operating on behalf of Synack. His account on our portal mentions Synack as his affiliation, he has interacted with us using a synack.com email address, and he has written blog posts that are used by Synack for marketing purposes.
  • I contacted Jay Kaplan, the CEO of Synack, to explain that we believed that Wes had gone well beyond what is appropriate for a bug bounty and had acted unethically. I told Jay that we would pay out the bounty for the RCE as agreed, and that we would be ok with Wes writing up his finding and exploitation of that bug. I said that we were not ok with him discussing his access of S3 or releasing the data he had taken, as we later explained to Wes directly.
  • I told Jay that we couldn't allow Wes to set a precedent that anybody can exfiltrate unnecessary amounts of data and call it a part of legitimate bug research, and that I wanted to keep this out of the hands of the lawyers on both sides. I did not threaten legal action against Synack or Wes nor did I ask for Wes to be fired. I did say that Wes's behavior reflected poorly on him and on Synack, and that it was in our common best interests to focus on the legitimate RCE report and not the unnecessary pivot into S3 and downloading of data.
  • Jay informed me that Wes's actions were not ordered or condoned by Synack, and I have no reason to doubt him.
  • At no time did we say that Wes could not write up the bug, which is less critical than several other public reports that we have rewarded and celebrated. In fact, one of my engineers involved in this issue once found a great (and in his case, original) RCE, got a big bounty, wrote it up publicly and ended up with a job offer.
  • This bug has been fixed, the affected keys have been rotated, and we have no evidence that Wes or anybody else accessed any user data.
  • Until Wes posted his blog post with no warning to us (and after he had pre-briefed members of the media), I thought we had reached a good compromise.
To be clear, here is our final response to the report including the S3 data:

December 4, 2015 at 6:06pm

Hi Wesley,
Thanks again for your report. We’re following up with clarification on your request about blogging. Our program supports researchers who want to publish their work after responsibly disclosing an issue, in addition to the payout.
We feel it’s appropriate for you to write up your process for finding and testing the initial RCE on sensu.instagram.com, but not any actions you took after finding that RCE. We ask that you share the draft post with us ahead of time so we have an opportunity to provide feedback.
We hope this is acceptable to you and allows us to continue working together in the future.
Thank you,
Reginaldo
Security
Facebook
We will be looking at our documentation and the operation of our program. We successfully handle hundreds of reports per day, but I don't think we triaged the reports on this issue quickly enough. We will also make our policies more explicit and work to be clearer about what we consider ethical behavior.
I understand that taking critical feedback from people I respect is part of being a CSO trying to balance many societal goods. I strongly believe that security researchers should have the freedom to find and report flaws for the betterment of humanity, and I believe it is right to offer them economic rewards for their hard work. I also know that the temporary peace that exists today is based upon the basic idea that bug bounties cannot be used by attackers as a cover for malicious behavior. Condoning researchers going well above and beyond what is necessary to find and fix critical issues would create a precedent that could be used by those aiming to violate the privacy of our users, and such behavior by legitimate security researchers puts the future of paid bug bounties at risk.