Tag Archives: failure

Intelligence – Garbage In, Gospel Out

I don’t remember which podcast or who said it, but “Garbage In, Gospel Out” is so true, especially when talking about Cyber Threat Intelligence. I talked a little about this before, both in conference talks and in Validate Data Before Sharing.

But here it is, three years later, and the problem remains; I’m willing to say it is getting worse. We’re not running full life cycles, either for Intelligence or for Incident Response. We get to the collection phase and call it done. NixIntel has a good post on that on their blog.


One of the differences between college and real life (bias in speaking)

At the last talk I gave, I expected audience participation, because I asked for it. I failed the audience. I know how to improve the talk for next time.

What was the bias that led to me failing the audience? I’m used to participation being part of my grade, and to having to participate. Others in my classes were the same way. Yes, we had some who barely participated, but usually half the class did.

Because that’s what I was used to in a college classroom setting, that’s what I expected at a conference talk. The result was that I failed my audience with expectations I shouldn’t have put on them.

One way to try and up the game

Yesterday, I gave my opinion on how I think we are missing an opportunity. Before it even went live Monday (I wrote it Sunday, and I’m writing this Monday night), a conversation happened. The main point is that attackers are working together, so why are the defenders all playing the Lone Ranger / Zorro and going it alone?

I also had quick Twitter conversations with Ch3ryl B1sw4s and Timeless Prototype. One was related to yesterday’s post, one wasn’t. But here are some thoughts on how to up the game on the defense side. I’m not an expert; I’m just some guy working on a Master’s in Cybersecurity to go with my BS in Information Assurance.

The goals:

  1. Give people who work in SOCs, on CIRT teams, or in security generally, regardless of team size (even the one person who has to do it all at a small company), a group of peers they can contact and discuss things with.
  2. Keep adversarial attackers out, but allow pen-testers and others access if they want to join.
  3. Provide enough information to be helpful to each other without putting our companies at risk.

Step 1. Create a Security Operations-based Web of Trust

We need a way to validate people. Say I’m on a CIRT: I can vouch for all my CIRT members. And if I have been interviewed by another CIRT, I can vouch for the members who interviewed me there. That means I can get those two groups talking, and at some point, like at a con, they can meet.
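Since PGP is the classic implementation of a web of trust, the vouching could even be bootstrapped with existing tools. Here is a minimal sketch, assuming hypothetical key files and addresses:

    # Import the public key of someone I can personally vouch for
    gpg --import alice-cirt.asc

    # Vouch for them by signing their key after verifying their identity
    gpg --sign-key alice@example.org

    # Anyone can then check who has vouched for whom
    gpg --check-sigs alice@example.org

The nice thing about key signatures is that the trust is auditable: a stranger can see that someone they already trust has vouched for me.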

Step 2. Secure communication channels.

There are different options for communication: out-of-band forums, chat (IRC), OTR IM, or whatever people think would work best.

The reasons for this are multi-fold.

One, it gives us neutral ground to talk on and puts a layer between our conversations and our employers. That layer is for protection: obfuscation is not security, but having a group like this invites attackers. Keeping the company names out makes it harder to attack those companies because of our associations. It’s not about hiding things from the company.

Two, if we have to contact another team with “Hey, I’m seeing a lot of Viagra ads coming from your domain,” I don’t have to worry about interception because their mail server or mail DNS is compromised.
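As a rough example, even regular email works for that kind of heads-up if the message is end-to-end encrypted with keys from the web of trust above. The recipient address and file names here are hypothetical:

    # Encrypt and sign the notification so a compromised mail server
    # in the middle can't read or tamper with it
    gpg --encrypt --sign -r soc-lead@example.org -o heads-up.txt.gpg heads-up.txt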

Step 3. Share sanitized knowledge.

Note I said sanitized. This should make the stakeholders at our employers a little more relaxed. They know we are sharing Indicators of Compromise, or asking “hey, I noticed this strange thing, is anyone else seeing it?”

It would also be nice if someone who finds malware aimed at another company shared that, instead of saying “yep, not my company,” and without having to reveal everything they did to find it. Just say, “Hey, I found this going after X, anyone else seeing it on their network? How about X, do you know you’re a target?”
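As a rough sketch of what sanitizing might look like before indicators ever leave the building, assuming a plain-text indicator list and a hypothetical internal domain name:

    # Drop internal RFC 1918 addresses and redact our own domain
    # before sharing the indicator list with the group
    grep -vE '^(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)' iocs.txt \
      | sed 's/corp\.example\.com/REDACTED/g' > iocs-sanitized.txt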

I’m sure this could be fleshed out more, and I’m sure there are things I’m missing. I know it’s partly re-inventing the wheel, but really, Twitter is faster than InfraGard on attacks; the trouble with Twitter is that both sides see the information at the same time, and things get lost in the noise. I know HTCIA is a thing, but is its mission the same?


My thoughts on Ashley Madison Dumps: another missed chance to up the IR game

I don’t care what people want to do in their spare time. I don’t care about the teaser dump, I don’t care about the 9 gig dump, I don’t care about the 20 gig dump, and I don’t care about the 300 gigs that Impact Team claims to have.

However, as someone whose job it is to defend the company, a member of the Blue Team, I have responsibilities to that company. Instead of sending DMCA takedown notices, Avid Life, or at least its Incident Response team, should be working with every non-webmail domain in the dump. If a company’s domain shows up in the list, they should contact that company’s CIRT. This allows the CIRT to defend against any possible attacks.

Now, granted, the attacks the CIRTs are most likely to see are spear phishing and account brute-force attempts, but it still makes sense to share the relevant information. I believe the same about the Anthem and OPM breaches. In all these cases, these have been missed opportunities.

Based on what I’ve done so far, what I’ve sat through in presentations, and what I’ve learned in school, not enough of us are working together. Company CIRTs stop at the perimeter when they should probably be sharing information. I’ve seen too many in the industry say “that’s their problem, let them find it.” Meanwhile, how many times have we as an industry seen news stories saying Company X didn’t know it was breached until it was pinged by the U.S. Government?

I know Scott Roberts said at his BSides Columbus talk that there are out-of-band forums some people use, and it sounded like the members came from multiple CIRTs. But what is the usage like compared to all the CIRTs, security teams, and sole admins supporting whole companies that could use that kind of forum for help?

Should a CIRT’s responsibility stop at the perimeter, or should all the teams out there have ways to work together through a web of trust to make attacking harder?

On what planet is General Alexander worth $1,000,000.00 a month?

The news wires reported that General Keith Alexander has moved into the private sector and is offering his services to finance companies for a million dollars a month. This is the person who took over as Director of the National Security Agency on August 1, 2005, and left in October 2013 (Wikipedia). Remember, that was after the Edward Snowden leaks came out.

Which really leads one to wonder: were those really leaks, or was it a case of “we know this is compromised, let’s make it public knowledge so we can hide the real data”? Here is an interesting thought: is Snowden really still working for the U.S. Government?

If you’ve read Cryptonomicon or seen the Sherlock episode “A Scandal in Belgravia,” you probably know what I mean. For those who need a quick refresher: let assets of lower value go to hide the assets of higher value. Blow up a plane with dead people already on it instead of letting a real passenger jet get blown up. Let a German U-boat sink a freighter, or slip past the blockade, to keep the enemy from realizing their codes are broken.

The C-levels at banks should be asking some hard questions if Gen. Alexander shows up offering them services. Like: what really happened on his watch with Snowden? How does that failure make his people qualified for the private sector’s needs? Yes, Gen. Alexander may have some government-related attack sources, but we already have that in the private sector with InfraGard and the various breach reports.


Over Thinking Problems

I think one of the problems we may have in this industry is overthinking the problem and doing more than it needs. For example, I upgraded my personal VPS recently, the one that runs this site and Rats and Rogues. The upgrade required a reboot, but because I rarely reboot this box, I keep forgetting that my iptables rules aren’t persistent. I usually remember and reload them fairly quickly after a reboot.

The night of the upgrade wasn’t much different. However, I messed up the command. Being a lazy admin, I use the built-in tools to do the work for me, and I love how Ctrl-R scrolls through your shell history based on a few characters you type. Well, instead of iptables-restore < firewall.rulz I typed iptables-save > firewall.rulz. Yes, since the firewall was empty after the reboot, I overwrote my saved rules with nothing.

My very first thought was “WOOHOO, I get to do forensics on my live system.” I went to Twitter to brag, though I’m not sure people realized that was the point. @secbuff asked why not restore from backup. He was right. The majority of my rules block SSH brute-force attempts (the ones that make it past DenyHosts), mail relay attempts, and user account enumeration. While playing forensics would be cool, this is a live host on the internet with services that do get attacked. Reconstructing the rules that way would have left the box exposed to the internet way too long, and it was a case of overthinking the problem.

So I grabbed a backup file. Instead of uploading it directly, though, I opened it in a text editor, hand-sorted the rules by network number, and pasted them into the terminal window. I also finally dealt with the persistence issue; we’ll see if the iptables-persistent package worked right on the next reboot. Oh, and since I add networks on a regular basis (when reviewing my logs), I wrote a small shell script to save the rules to two different locations, with a spare backup.
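A minimal version of that script, with hypothetical paths, could look something like this:

    #!/bin/sh
    # Dump the live ruleset and keep copies in two locations,
    # plus a dated spare in case I clobber one again.
    set -e
    iptables-save > /etc/iptables/firewall.rulz
    cp /etc/iptables/firewall.rulz /root/fw-backups/firewall.rulz
    cp /etc/iptables/firewall.rulz "/root/fw-backups/firewall.rulz.$(date +%Y%m%d)"

Run it after every rule change, and the worst a fat-fingered redirect can do is clobber one copy.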