Back in December (it really doesn’t feel that long ago), I talked about how I was prepping for a project.
The end goal is to brush up on Network Security Monitoring (NSM) and use it to monitor my home network better. I occasionally check the logs, but I think I would be more active about it if I had a centralized tool to help. Right now, I have a log of blocked-domain alerts in my pfSense firewall’s pfBlockerNG reports screen. Most of the entries are tracking-related domains that the Pi-hole isn’t blocking, so they get caught by the second block list on the firewall.
Note: I say my home network, not my home lab. As I said in the past, I no longer maintain a home lab due to cost and space. I have parts of my network isolated, but I wouldn’t call that a lab.
I’m getting ready for a fun little project with a friend. Several years ago, while doing my undergrad, I got a copy of Chris Sanders’ “Applied Network Security Monitoring.” I was going to do a book study group at school when it came out, but it turned out to be a required text for my Incident Response course.
Sadly, that class was a mess, and I don’t think we used the book in it at all. A different friend and I referenced the book to build a project for one of our other classes. We used it to build several honeypots with what was supposed to be centralized logging. That, however, failed because of the data center we put the logging server in: for some reason, that particular DC didn’t allow logging into it. The VPS provider’s other data centers would have worked fine, just not that one. No clue why. We did complete the project with the honeypots, but we had to monitor each one individually instead of having central logs.
Anyway, while talking about burnout recently with the first friend mentioned above, we realized we both feel burnt out. We don’t want to do anything computer-related after work: studying, Udemy, Coursera, Hack-in-The-Box, TryHackMe, scripting, blogging, etc. To get around this, we’re going to work through Applied Network Security Monitoring, and we’re also going to blog about it.
Before confirming this was the book and project we would do, we asked Chris Sanders via Twitter if the book material was still relevant. He said the concepts would be, but the tools would be different now.
It should be fun.
Once my friend gets his blog set up, I’ll link to it too. And yes, I know I still have some OpenFAIR/CTI/OSINT-related content I want to blog about; see the comment about being burnt out above.
I don’t remember which podcast or who said it, but “Garbage In, Gospel Out” is so true. Especially when talking about Cyber Threat Intelligence. I talked a little about this before, both in conference talks and in Validate Data Before Sharing.
But here it is, three years later, and the problem remains; I’m willing to say it is getting worse. We’re not running the full life cycle, whether Intelligence or Incident Response. We get to the collection phase and call it done. NixIntel has a good post on that on their blog.
Shortly after rebuilding my Raspberry Pi NAS last year, it stopped working again. The system wouldn’t boot properly, even after swapping the micro-SD card for one with a fresh build of Raspbian. A few weeks ago, I bought a new Pi 3 B and rebuilt the NAS again. When the system powered on, one of the USB drives wasn’t working; it looked like the file system was corrupt. During attempts to recover the USB drive, it stopped being readable. Both my Windows and Linux computers could see the USB drive, but not the file system or the data. I reformatted the drive, and it started working fine; it just had no data on it.
In my last post, I talked about how I played with Docker on a VM that I constantly re-stage to its original state. Some of what is below can be found in my Peerlyst post too.
Considering how long it took to download the images, I decided that on a fresh revert I would update the box, install docker.io, and then pull the REMnux images.
Using the thug image, I found that the container doesn’t match the directions on the REMnux site, the Docker Hub page, or the GitHub page.
However, reading the Dockerfile gives the needed information.
The first thing wrong is the way thug is run now; the command inside the current image isn’t what the documentation shows.
But before getting to thug itself, to run the container and be able to get the logs out, the following has to be used:
sudo docker run --rm -it -v ~/logs:/tmp/thug/logs remnux/thug bash
/tmp/thug/logs is the working directory set in the Dockerfile on GitHub.
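Inside that container shell, a basic thug run looks something like the line below. The URL is just a placeholder, and I believe -F is the file-logging switch, but check thug --help inside the container rather than trusting my memory.

thug -F "http://example.com/landing/page"

With file logging on, the output lands under /tmp/thug/logs, which the -v mapping above exposes as ~/logs on the host.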
At work, we have this thing on Fridays called power-up time. It is the last four hours of the week, set aside for working on personal projects, testing new ideas to see if they are worth implementing, or self-improvement. Most weeks, it is when I get to look at the most tickets doing tactical-level intelligence, since the rest of the week is filled with project or priority case work.
Recently, while working on tactical-level information for SOC tickets, I was able to add in a little fun and actually power up. I wanted to do some reverse engineering of the malware associated with the ticket to see if there were more IOCs that could be extracted.
Earlier in the day, I had read an email on the SANS DFIR alumni list in which someone talked about using REMnux with Docker. Later in the day, while working the ticket, and because I didn’t have a REMnux box, I decided to check out the Docker containers. This was also my first time working with Docker. I started at Lenny Zeltser’s REMnux Docker site.
I went to my Linux VM, a box that gets reset to a fresh-install state via snapshot after each use. After a sudo apt install docker.io and a sudo docker pull remnux/pescanner, I had the container.
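For anyone wanting to repeat that, the whole sequence is short. The mounted folder and the in-container command below are my guesses at how these images are typically used, so double-check them against Lenny’s documentation first.

sudo apt install docker.io
sudo docker pull remnux/pescanner
sudo docker run --rm -it -v ~/samples:/samples remnux/pescanner bash
pescanner /samples/suspicious.exe

The -v switch maps a local folder into the container so the file being scanned is reachable from inside it; ~/samples, /samples, and suspicious.exe are all placeholder names, and the last command runs inside the container shell.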
I ran it and learned a little bit about Docker. I also got an understanding of some of the information that VirusTotal displays under the Details tab.
About a month ago, I added a few shell scripts to my DFIR GitHub repository. Three of the four scripts are used at work daily in either a Linux terminal or a Cygwin terminal. The fourth script is something I use to help with quarantined mail and isn’t really DFIR-based.
b64Decode.sh and hexConvert.bash take command-line arguments and report back the result. For example:
~$ b64Decode.sh 4piiZXhjaXRpbmcgbmV3cw==
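The guts of a script like that are tiny. Below is my rough sketch of the idea rather than a copy of the repository version: take the base64 string as the first argument, decode it, and print the result.

#!/usr/bin/env bash
# decode the base64 string passed as the first argument and print it
echo "$1" | base64 -d
echo

The trailing echo just pushes the prompt onto a fresh line after the decoded text.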
A while ago, I created a new repository on GitHub for the scripts I wrote for DFIR. Since then, it has only had the Computer Ping script in it. Today I added the first of the Extractor scripts.
The first extractor script, xHttpExtractor.py, came about from a web-based tool I used. The tool would run on an uploaded file and then list a bunch of indicators: system artifacts, URL call-outs, network communication, etc. However, the tool didn’t have a good export mechanism at the time, so I would copy and paste everything into a text file and then extract the URL host details from the text files, mainly so I could add the URL indicators to the web proxy.
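The core of the idea is simple enough that a shell one-liner gets most of the way there. This is only a sketch of the approach, not the Python from the repository, and report.txt is a made-up input file name.

grep -Eo 'https?://[^/[:space:]"]+' report.txt | sed -E 's#https?://##' | sort -u

It pulls anything that looks like a URL out of the pasted report text, strips the scheme, and de-duplicates the hosts so they can be loaded into the proxy.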
I rewrote a script I use at work. It was a messy bit of Python 3 previously. While it’s still not the cleanest of Python scripts, it scratches my itch. It was originally just a straight line of commands with lots of repeated code; I made some functions and made it a little more modular. I know I need to learn PEP 8 and start following it. This was just to improve something I wrote previously with things I learned from Automate the Boring Stuff with Python.
I’ve shared it via my GitHub repository for DFIR scripts. They’re clean (not tied to any company). There is only Computer Ping there right now, in three scripts all based on the same idea:
Ping a computer; if it is down, keep trying every 10 minutes for up to an hour. Pop an “alert” either when the target comes up or when the script finishes without it coming back up. There is a rough sketch of the loop after the list below.
- 1 version for Windows running python (wComputerPing.py)
- 1 version for Windows running Cygwin (cwComputerPing.py)
- 1 version for boxes running Linux (lComputerPing.py)
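The loop behind all three is roughly the following. It is a quick shell sketch of the logic rather than the actual Python in the repository, and the host name argument is whatever box you are waiting on.

#!/usr/bin/env bash
host="$1"
for attempt in 1 2 3 4 5 6; do             # 6 tries, 10 minutes apart = 1 hour
    if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
        echo "ALERT: $host is back up"
        exit 0
    fi
    [ "$attempt" -lt 6 ] && sleep 600      # wait 10 minutes before the next try
done
echo "ALERT: $host still down after an hour"

Presumably the Windows and Cygwin versions differ mainly in the ping flags and in how the alert pops.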
I’ll write others and upload them to the repository as I have time or re-write stuff I use. They might not all be Python, but my goal is to be more Python than not.
Has anyone else noticed that Bing has lots of Domain Generation Algorithm (DGA) links in its search results? It isn’t on every search, just some. But the same search at Google doesn’t return the DGA links.