VIDEO: An intro to Disclose.io and hacker safety

Talk from @hacknotcrime & @redteamvillage #HackerCon 2021

This talk is an update on the Disclose.io project in 2021, some of the new things we've been working on, and the general state of hacker safety in 2021. This was a part of the Hacking Is Not a Crime and Red Team Village event - I highly recommend checking out the other videos from this event!

Transcript:

Thank you, Brian. Good afternoon. Good morning. Good evening. Good day. Good whatever it might be for you. I'm coming to you from Sydney, Australia. It's about 7:00 AM right now, so it's definitely good morning on my side, but great to be chatting with you all. And huge thanks to the Red Team Village and to the Hack not Crime crew for putting this on. Just watching the lineup come together, to see all the talks so far, this is a subject that I'm just absolutely passionate about, and it's great to see the work being put in to spread the word and get conversation going and have all that sort of thing happen. So thank you so much for having me be a part of that, and thanks for putting it on in the first place, I think it's great.

So yeah, my name is Casey, we've covered that off. Some of the details here, I've got the hack persona and the hustle persona. I actually grew up doing security on a technical basis on the offensive side, and then moved across into entrepreneurship and business, eventually became the founder of Bugcrowd, and the rest is history, I think, to some degree there. We actually pioneered the crowdsourced security-as-a-service model, so we didn't invent vulnerability disclosure and bug bounty, but we were kind of the first to come out with this idea of putting a platform in the middle to make it all work. So that's really the lens of a lot of what I'm going to talk about. This is not a Bugcrowd talk per se, but a lot of what actually went into Disclose.io came from experience, and from some of the interactions and friction that I observed and that the team had with Bugcrowd, that's now evolved into the things that we're trying to solve with Disclose.io.

So let's rock and roll. Obligatory disclaimer: I am not a lawyer, and even if I were, I'm not your lawyer. I've seen this come up a few times in talks, and I often say that bit kind of tongue in cheek, but it is important to know. Listening to legal counsel from people who are lawyers I think is a really valuable thing to do, but if you're in a position where you really feel like you've got something you need to work with, get your own lawyer. That's a really important call out there. But I play a lawyer on TV sometimes, so let's just roll with that. If in doubt, get legal help.

So what we're going to go through is a little bit of backstory. [Ameet 00:02:56] did a phenomenal job of covering a bunch of this before, which is great because that means I can actually spend a bit more time just digging into the project and where we're up to and how people can actually participate and help. And Holly, I think, did a phenomenal job as well on setting the backdrop around what's happening with various laws. But what I'll talk to briefly is how Disclose.io came to be in the first place, what it is, how to use it, what's new, and then a call to action. I think the thing that I would love out of this talk is for people to understand what we're trying to get done here, to potentially identify ways that you can participate and help out, or at least go out and basically spread the good word. And you'll see why in a second.

So this is a lot of content to fit into 30 minutes. I'm going to buzz through these slides, but this deck is pretty heavy on links like the different repos, the different things that we've got going on and all that sort of stuff, like a directory and how to find your way around on the project. It'll be up after this on SlideShare, and I'll be on the Discord as well if anyone wants to reach out and chat. So let's go.

So, problem statement. This is something that I learned as an entrepreneur, having done a bunch of fundraising and different things like that. Pitching is my native space when it comes to storytelling, and you always try to start with the problem, right? Excuse me. In this case, really the problem is the law's perspective. We're here to talk about how hacking is not a crime, and the whole reason Hacking Is Not a Crime is a thing in the first place is that the law thinks hacking is probably a crime. The CFAA, the Computer Misuse Act, all the different laws that exist around the place, for the better part they don't really have an idea of good faith. It's like, if you're hacking a computer, you're probably a criminal, so let's just start there. But that obviously creates problems for those of us trying to help out, right?

From the organizational standpoint, for people that want to actually engage with that or receive input from that, the problem it creates for them is: I need to create a legal exception for finders and hunters, right? The challenge that creates is that for lawyers, we're still talking about a fairly novel and fairly uncharted area of law just in general, and what lawyers tend to do when they're uncertain about stuff is basically just write a lot of words. You end up with policies and different things that are put out there that are really long, they're really difficult to read, they're full of [inaudible 00:05:28], there's all of these different things that... Honestly, you can't actually fault the lawyers for that much. That's their job, to do things completely. But the problem for the hackers is that they're probably not lawyers as they're reading this stuff. They might speak English as a second language, and honestly, people rarely actually read the T's and C's fully.

In the case of people trying to do proactive vulnerability research, it's usually just getting to the scope and getting going. So yeah, all of those things together (you can see the friend from South Park there), you're going to have a bad time. And honestly, this was something that we observed in probably the first year of Bugcrowd: all of these different things existing as the prevailing state, and then the problems that it creates. The biggest one from my perspective is that as a hunter, as a finder, as a security researcher, as a hacker who's trying to do work or trying to submit something that you've found, you can pretty easily end up in a position where you're creating legal risk for yourself that you maybe didn't even realize was there in the first place. So the question we started asking ourselves, and really what I was noodling on a lot, was: how do we make it easy to do this well?

This is the piece that I think Ameet closed off really well in her talk. And if you weren't there for the stream or to watch that live, I do encourage you to go back and take a look. We're not the first to stumble across this issue and try to solve it. Rain Forest Puppy was the OG with RFPolicy back in 2001, basically trying to codify: what should people do? How should we approach this? How do we make language that people can just pick up and use to be able to set expectations and communicate how to interact in this conversation between hunters and the organizations that are trying to receive their help?

Bugcrowd and CipherLaw partnered together to create the open source vulnerability disclosure framework in 2014. ISO came out, the DOJ came out with stuff. Ameet, as she mentioned before, did a ton of really excellent work, not just zeroing in on this issue of safe harbor and legal safety, but also, I think, actually surfacing the problem, frankly. A lot of people had been working in a similar space up to that point, but not a lot of people really cared. And I think Ameet was the linchpin in actually driving awareness around this as well. So really what happened is in 2018, I had the idea to basically grab a domain, create a brand around it, and start to focus these efforts, create a focal point for people to be able to contribute towards. But then also to be able to use the idea of Disclose.io as a thing that ultimately becomes a net benefit to companies that implement this type of stuff, beyond just making it easy for them to start a VDP.

Which is kind of what this slide is talking about. The idea is, all right, we've put all this work into making language and implementation of this thing as frictionless as we can. In 2018, the difference between then and 2014 when we first started working on this, is that the concept of bug bounty programs and vuln disclosure, even the idea of a hacker being potentially a digital locksmith and not just inherently a burglar, had become more common, more accepted. It had actually become almost perceived as a benefit. And I still feel like we're on that journey, there's a lot of distance to go there, which you probably just heard in John's talk with some of the pitfalls that can pop out. But we're at a point right now where neighborhood watch for the internet, I think, is a fairly intuitive concept for the consumer, and for really anyone, even people in the boardroom, to understand. And we're at a point in time where the risk of bad things happening through computers is, I think, fairly obvious as a possibility for basically everyone.

The question became, how do we make the adoption of vulnerability disclosure programs, with best practice, go viral? How can we make this something that's such an obvious and clear thing for people to want to do that it takes over some of the momentum of organizations like Bugcrowd and others that are trying to promote this stuff, and even the hacker community itself, trying to get organizations to hear about what we've found? How can we create enough momentum for it to become a snowball that rolls down the hill?

Really, our vision with Disclose.io is a healthy and ubiquitous internet immune system, enabled by security research, reporting and disclosure. It's the second time I've seen this screencap of Karen so far, honestly. Shout out to her for, I think, creating a really good touchpoint for people to actually understand the ability for hackers to be helpful and not just harmful. She actually used the phrase internet immune system quite a bit in that talk, so I'm giving her some props where props are due.

Our mission really is to standardize and promote neighborhood watch for the internet. The standardization part is important in terms of setting precedent and reducing friction. The promotion part is important for people to actually become aware of the fact that this is a thing that they should do in the first place. But then as I said before, to make it desirable, make it something that the market actually wants to do. Ultimately, if they step into it and say, "This is a thing that is just probably going to be a part of how we do business on the internet in the future anyway, so we might as well get ahead of it," right?

That's my firm belief around VDP. This idea that between now and the heat death of the universe, anyone who runs any kind of IT infrastructure whatsoever is going to have to deal with someone like John or someone else in the research community who's found a problem with this stuff. That's a physics issue. It's not really as much an issue of choice, I think for the vendors and people running organizations anymore. And that's becoming more clearly true, as more of this type of thing happens.

So how do we make that easier? How do we get ahead of that, like snowball the thing down the hill? But then from the standardization perspective, help people do it well. Help people actually put legal protections in place and all of the different things that are needed to make this successful.

So really the end game is a virtuous cycle. What I'd love to see is for the seal to become almost like the green padlock. And I used that analogy deliberately because the green padlock is in the process now of going away. I would love to see that happen, for it to get to the point where we go through this phase where folks are looking for this type of thing to feel safe on the internet, but then all of a sudden it becomes so standard and so normalized that we don't really need it anymore. We're a long way from that right now, which is a part of the reason for the initiative in the first place. But that's kind of where I'd see it heading and where I'd like to get it to. I think it's a pretty easy kind of vision to kind of glom onto. For folks that ask me the question, like, "What is this all about? What are you doing? Which part of this is most important?" This is what I try to bring them back to because it really, I think, captures the essence of what we're trying to get done.

What is Disclose.io? Really, it started off, kind of as that timeline suggested before, as this collection of open source projects, which consolidated into this initiative, this movement. We're now in the process of filing for a 501(c)(3). There are literally lawyers talking to each other and paperwork flying back and forth as I'm speaking to y'all right now, which is great. I think having it contained, and having it be something that can be in a position to get some funding in, to get project management happening and all those different things, is going to be a really useful tool. But also establishing it as its own thing.

I think the policy side of this mission is one that we've worked pretty hard on. Bugcrowd's been a significant contributor to this project, but so have many others. Part of the goal there is to make sure that it's actually seen from the outside as a standalone thing, in the way that it actually is. So it's 100% community powered and maintained. It's coordinated via Slack and GitHub. There's about 50 core members in the Slack, 200 contributors or so, and there's 2,300 organizations in the database.

The screencap here is a bunch of people that have contributed. Some of the faces and names you'd recognize from this conference. We get to hang out, and what I love almost the most, frankly, about this project is that it just brings me around some of the most intelligent, thoughtful, passionate people in this space, because this is really an enabler and a precursor to a lot of the other stuff that goes on in security research and cybersecurity. So that's who we are.

What is it? What do we do, or what have we put together? Really, there's kind of five core pillars to it. Three of them I'm going to speak to today. The other two, we can talk about offline or you can follow some links and check it out for yourself. It's the terms, it's the list, it's the seal, it's the community, and there's basically this kind of almost magnet for innovation I think, in a sense around how to improve vulnerability disclosure that the project's created. So there's this sort of list of other useful stuff that we're putting together that doesn't necessarily fit into those first four categories.

I told you I was going to power through this. There's a lot of slides, but hopefully y'all are tracking. So to begin with the terms, after I have a sip of water here. The whole thing, as I said before, is up on GitHub and people can contribute and consume it in that sort of way. The idea behind the terms, this is really kind of where it all started. It's the boilerplate vulnerability disclosure policy templates. They're created and contributed to by lawyers, program owners and policymakers. And the thing I think that's most important, which goes to my whole "lawyers get verbose when they're nervous" comment earlier, is that it's structured to balance legal completeness, brevity, and readability.

So the idea is: how can someone as a hunter, who doesn't necessarily understand security reporting, let alone law, or potentially even English, have a reasonable chance of being able to read through this stuff and get what's going on? How do we solve that problem? It's been pretty amazing with this. I think Jack's going to speak about this in a little bit, where we've seen things like the Safe Harbor clauses that got folded into Disclose.io picked up by the voting machine manufacturers and by some of the states, in order to get ahead of some of the things that ended up happening in the 2020 election from a cybersecurity vulnerability, but also a cybersecurity uncertainty, standpoint. That was one of the reasons we were pushing pretty hard on that one. It didn't necessarily solve the problem, but I like to think it helped, and it actually drove adoption of those sorts of clauses into that solution.

It's come up in a whole bunch of other places as well. Check the list out, you'll see. The programs that are up there with full Safe Harbor more often than not actually have language in their briefs, which is pretty neat. You can see some of the organizations that people work for that have actually contributed to this project. So another little disclaimer there: these aren't endorsements of the project by those organizations. I want to be clear about that. But this is the kind of individual, the kind of contributor, that we're in a position at this point to see actually contributing to this stuff. They've got the kind of context that you would get from organizations like these ones.

How do you use it? For finders, what the terms do is provide an easy reference, an easy kind of thing to point to, to encourage organizations to start a VDP or to adopt Safe Harbor if they don't already have it. So if you're talking to an organization, some of the stuff that John just talked about around difficulties when you've found something, but there isn't a proactive thing established within an organization to receive that information, or to even set a policy around where expectations should sit, if that's not there, you can point them to this and it gives them a really easy starting point. So I think that's a pretty useful tool for finders and hackers. It's recognizable and, clearly, I'm kind of laboring this point now, but it can be understood; that's the goal. And Safe Harbor, honestly, amounts to less chilling effect. The whole idea of the potential threat of legal repercussions stymieing security research, or, even if something's found incidentally, security reporting, that's something that we want to just eliminate. And I think that's a good thing. For organizations, it's simple. It makes it easy. This is an otherwise foreign piece of policy creation.

So what we're trying to do is to make it more familiar and get people moving. It's de-risked by the fact that it's open source and there's consensus around the output; you can actually see that in the repo, the folk that have come in and contributed to it. That makes the people creating this policy more comfortable with adopting what we've said, because it's not just Casey the bush lawyer, or other people working on their own variations of this. It's actually a team effort. It's useful for copy-pasting or as a foundation.

So let's move on. The list is another piece. So really what this is, is a community-powered, vendor-agnostic directory of all known VDPs and bug bounty programs. The goal of this is to basically create a focal point where people can update stuff. As a finder, you can do a pull request and basically add information that you've discovered around a particular program, or if you're a vendor and you feel like your information is out of date or people don't know, you can go and update that as well.

It's all CCA 4.0, so it's open source in that sense. It's done in JSON and CSV for readability, and it's used by basically everyone. Bugcrowd actually powers its directory off this, and we've seen other people do that too. I think hunters use this quite a lot. We've just added search and some other things. And yeah, that's really the goal: how do we create, for finders, the ability to find the right contacts for an organization? The search that we've just added (thank you to Andrew, who put that together and contributed it) has made that a lot more accessible for users of that particular data set, and we've seen usage increase as a by-product of that.
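Since the directory is plain JSON, it's easy to script against. The sketch below shows the kind of filtering a hunter might do, e.g. picking out programs that declare full Safe Harbor. The field names here ("program_name", "policy_url", "safe_harbor") are assumptions based on the data described in the talk, not a confirmed schema; check the project's repo for the real one.

```python
# Sketch: filter a Disclose.io-style program list for full Safe Harbor.
# Field names are illustrative assumptions, not the project's actual schema.

def full_safe_harbor(programs):
    """Return names of programs that declare full Safe Harbor."""
    return [p["program_name"] for p in programs
            if p.get("safe_harbor", "none").lower() == "full"]

# Hypothetical sample entries in the same shape as the JSON directory.
sample = [
    {"program_name": "ExampleCorp",
     "policy_url": "https://example.com/security",
     "safe_harbor": "full"},
    {"program_name": "OtherOrg",
     "policy_url": "https://other.example/vdp",
     "safe_harbor": "partial"},
    {"program_name": "NoPolicyInc", "policy_url": ""},
]

print(full_safe_harbor(sample))  # → ['ExampleCorp']
```

The same one-liner works whether you load the list from a local clone or fetch the JSON over HTTP first.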

What it does for finders as well is it helps them feel... it helps them make safe decisions. It's not so much about feeling safe. How do you decide whether the security research that you want to do is safe or not? And again, going back to the disclaimer: understand the legal concepts around this, consult a lawyer if you feel like you should, but this is this kind of filter for who is friendly for me to go off and do some work on, and who's potentially going to be unfriendly. That's really the goal here.

The other thing for finders is that it encourages them to think about what's happening on the other side. I think empathy in security research (this is another subject that John touched on a lot) is underrated. We're having fundamentally challenging conversations with organizations when we tell them that there's a broken thing. So the whole idea of fostering that empathy, seeing who's missing from that list and the various stages of maturity on that list, actually, I think, helps people look at the overall landscape and not just the vulnerability that they might be working with at the time.

So for organizations, it's a central point. And I mentioned that before, it's a tool for internal conversation, which I think is really powerful. This idea of, "Hey, here are all these other people that are doing this. Maybe we should too." That's a handy thing to have. And really it does serve as a single source of truth for vendors to be able to go out and actually establish what the status is when it comes to Safe Harbor and best practices like CVD. And I'll come back to that in a second, because we've just made some improvements there.

All right. So now there's the seal. Ameet covered some of this off, so I'll just glance across this, but basically the core concept within the Safe Harbor language is this idea of permission to hack, which we refer to as full Safe Harbor, and then permission to, or not permission, more a sense of safety around reporting in the first place, which we refer to as partial.

So for an organization to go out and say, "Okay, we're not going to pursue legal action if you stay within these definitions that we have of what constitutes good faith versus bad faith. You can report to us and feel like you're not going to get a call from lawyers or get your door kicked in or any of that other stuff." That's the starting point.

What's even more powerful, I think, for an organization to do is to actually engage what we call full Safe Harbor, which is the idea of basically going out and declaring that, again, under those same definitions of good faith, you as a security researcher are now authorized in light of things like the CFAA (anti-hacking laws; there are lots of other ones around the world, but that's kind of the granddaddy of them that folk talk about, and probably the most relevant to this audience as well), you're exempt under anti-circumvention laws like the DMCA, and you're exempt from any violations that you might commit against the acceptable usage policy or terms of service. And there's a general acknowledgement of good faith in there.

So the fact that you're kind of saying that, "Yeah, we get what you're doing. We think it's good. We don't think you're a criminal. We're actually being proactive about saying that as an organization so that you know where you stand."

So the seal for finders, it's recognizable. I think if it's not there, finders are going to want to talk to an organization and say, "Hey, you should do this. This is the thing that's going to help us. It's going to help your customers, your peers, like the general internet understand where you stand when it comes to security." You recognize the fact that it's a team sport and that transparency is a leading indicator of maturity at this point in the market. That's a good thing for you to do. Plus, as a finder, it's going to help me feel safe, which means I'm going to help you more.

This whole idea of transparency, turning the security research community from a chilled observer into an active advocate, I think is a really powerful one. And that's what we've seen happen. A lot of the more tenured bug bounty and vulnerability disclosure programs have a really positive relationship with the hacker community because they've been saying this for a long time. I actually think that's one of the core reasons for that. So this helps speed that up, and it helps make it obvious.

For organizations, it's a validated trust mark. It's this idea of, I'm taking part in neighborhood watch for the internet. This is a thing that I'm doing. That leads to the ability to actually promote that to peers, to customers, to the security research community, which I just mentioned before. And honestly, I think what I get excited about around this concept, vulnerability research and VDP is one of the rare things in cybersecurity where you can actually explain it to your grandparents and they'll probably get it. Like if you talk about EDR or any one of these other more technical security controls, they're going to get lost fairly quickly, but this idea of neighborhood watch, but for the internet, that's a pretty easy thing to grok.

So that translates to greater consumer confidence and actually creates the potential for more sales, more people to actually trust that organization and choose it. I think for security, just in general, we've been sort of waiting for that type of thing for a long time. Actually, Emily touched on this a little bit in her talk, this idea that traditionally we're an insurance policy. So how do we turn ourselves into a benefit and get all of these other things at the same time?

If you put them all together, you can kind of see how this works. A company wants to create or update a program. They use the terms (the disclose/dioterms repo) to help them do that. They update the database. As a result, they're able to use the seal. Another company sees that and thinks, "Where am I up to with this? This is a thing that seems to be becoming normal. I've been able to understand that more clearly and more quickly now." GOTO 10. And that's a shout out to all the folk on this stream that cut their teeth on BASIC like I did.

All right, let's talk about new stuff real quick. We're going well here. This is good. I was nervous about time, but I think we're going to track all right, and I'm looking forward to chatting on Discord if anyone wants to dig into any of this stuff. So really, status is an extension of the full Safe Harbor and partial Safe Harbor I just talked about: how do we create a recognizable ladder towards best practice, acknowledging the fact that most people don't do anything in the first place? That's kind of the status quo.

So at the very bottom, you've got just this kind of core idea of I've deployed a security.txt and I want to point people to the right place if they've found something. At the very top is the gold standard from my perspective at this point, and this is an open draft that people are very welcome to weigh in on with their opinions, but my point of view is that full Safe Harbor with a proactive coordinated vulnerability disclosure timeline and policy is basically as good as you can get at this point. So how do we create the ability to mark out where people are on that journey and actually create this ladder to success for them? As an organization, if you only have security.txt, you look at the person on the top of the hill and think, "I want to be there. Why are they there and I'm here?" That actually is a really powerful tool to educate around how things can be improved.
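For reference, the bottom rung of that ladder is a small plain-text file served at `/.well-known/security.txt`, standardized in RFC 9116. Here's a minimal sketch; the domain, contact address, and policy URL are placeholders, not real endpoints.

```text
# Served at https://example.com/.well-known/security.txt (RFC 9116)
# Contact and Expires are the two required fields.
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

Even that much gives a finder an unambiguous reporting channel, which is the whole point of the entry-level tier.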

Another one which we just dropped, and again, this goes back to the other stuff, this almost... As an entrepreneur, I love this, obviously, because I'm just an ideas guy and that's the kind of thing that switches me on, but this has been a magnet for people that want to find ways to improve the way that we get this all done. How can you decrease the friction for a finder? How can you make safe harbor and legal protections more obvious and all of those different things?

The idea with dnssecurity.txt is that we make security reporting information more accessible and authoritative. Basically what it is, is putting the equivalent of the security.txt standard in DNS zone files as a TXT record. What that does is make it more obvious. You don't have to hunt for it in the same way. You might not be looking in a well-known directory on a particular web server, but you're probably going to look at DNS if you're doing security research, so it jumps out at you straight away. The other thing is that a TXT record in DNS is broadly considered an authoritative statement on behalf of the company. If it's pointing to a policy, you can look at that policy and feel more confident that it's legit. That's the idea there.
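To make the idea concrete, a zone-file sketch of what that could look like is below. The record label and key=value format here are assumptions based on the description in the talk, not a quote of the published dnssecurity.txt draft; check the project's repo for the actual spec.

```text
; Hypothetical zone-file entries illustrating the idea described above.
; Label and value format are assumptions, not the published standard.
_security.example.com.   IN   TXT   "security_contact=mailto:security@example.com"
_security.example.com.   IN   TXT   "security_policy=https://example.com/.well-known/security.txt"
```

Anyone enumerating DNS during recon would then surface the pointer with an ordinary TXT lookup, e.g. `dig TXT _security.example.com +short`.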

This goes to the shout out before as well. I think when people find something, they need help, or they're concerned (is there fear involved, potentially, in reporting an issue?), or they just can't find the right person. There's a handful of people, of which I'm one (not a handful, a couple of dozen people), that almost invariably get the phone call when folk are in that position. I think that's awesome, but what I'd love to do is to be able to share that load out and actually make that process more sustainable.

We've created a community, and really the core focus of that is that, if you're in a position where you've got questions around how to do it properly, how to get in touch, like needing help getting in touch with the right person, any of that sort of stuff, you can basically go on there. There's volunteers like John, myself, Randy, others who will basically support you in trying to get your information to the right place and guiding you along the way. This is an inherently sketchy thing, I think, for finders to do for the first time as well as on the recipient side. I mentioned before this idea of having a problem being pointed out as this confronting experience. I think it is just as scary for the people that are finding things and trying to get them to the right place, so the whole idea is how do we bring a community around that and actually support it and help it roll forward with less friction and ideally less rough edges, less of the misunderstandings that you hear about a lot.

All right, so if you're still with me, awesome, great. We, as I said, plowed through a ton of stuff, and I'm just about to finish up here. Look, really what we're looking for, and what I would love to see, I think what everyone who works on the project would love to see, is more involvement. If you want to look at the front end of it, the database link is there. The community, which I just mentioned, you can join as well. If you're in a position where you can start a VDP, or if you've got one that doesn't have these kinds of legal provisions that protect researchers in its language and you're looking for help in actually moving that forward, go check out the terms, because they will help you. From just a general project standpoint, we're always looking for folk to help out and pitch in from a coding or design or legal standpoint. If you're interested in that sort of thing, hit me up, or hit us up rather, because it's a shared mailbox, or via Twitter.