I have broken this recon guide into a manual and a more automated approach. Special thanks to @jhaddix and @tomnomnom for their guides and tools!
References:
- The Bug Hunter's Methodology by @jhaddix
- Bug Bounty Program with @TomNomNom and @Nahamsec
This section was mostly curated from @jhaddix's methodology.
Manual: http://bgp.he.net - look up the target's ASNs and IP ranges.
builtwith.com
- Find popular or less popular pages; see what ad tools were used on the site and discover related sites.
- Spider the site using burp.
- Set a burp target scope
- Select all sites in "Site map" to spider.
- To get the sites out of burp:
- Right click selected hosts
- Go to "Engagement Tools" -> "Analyze target"
- Save report as html file
- Copy hosts from the "Target" section
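If you'd rather script that last extraction step, here is a rough sketch. The report filename, sample contents, and hostname regex are all assumptions for illustration; adjust the domain to your target.

```shell
# Stand-in for a saved Burp "Analyze target" HTML report.
cat > report.html <<'EOF'
<td>www.twitch.tv</td><td>api.twitch.tv</td>
<td>www.twitch.tv</td>
EOF

# Pull unique hostnames out of the report with a generic hostname regex.
grep -oE '[a-zA-Z0-9.-]+\.twitch\.tv' report.html | sort -u > burp-hosts.txt
cat burp-hosts.txt
```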
- Google search, removing subdomains you already know:
site:twitch.tv -www.twitch.tv
site:twitch.tv -www.twitch.tv -watch.twitch.tv
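The exclusion list grows quickly as you find subdomains. A small sketch that builds the dork from a file of known subdomains (known-subs.txt is hypothetical; you'd maintain it by hand):

```shell
# Hypothetical list of subdomains you've already found.
printf 'www.twitch.tv\nwatch.twitch.tv\n' > known-subs.txt

# Build the "site: minus known subdomains" query.
query="site:twitch.tv"
while IFS= read -r sub; do
  query="$query -$sub"
done < known-subs.txt

echo "$query" > dork.txt
cat dork.txt
```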
- Amass will automate this for you.
- Subfinder https://github.com/projectdiscovery/subfinder
- github-subdomains.py https://github.com/gwen001/github-search/blob/master/github-subdomains.py
- shosubgo https://github.com/incogbyte/shosubgo
- Cloud ranges - scan AWS, GCP, and Azure IP ranges for SSL certificates that match the target. (Sam Erb built an automated service for this.)
Brute-forcing is only as good as the wordlist you provide. https://github.com/assetnote/commonspeak2 https://github.com/assetnote/commonspeak2-wordlists/tree/master/subdomains
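It can help to merge several wordlists (e.g. commonspeak2 plus your own) into one deduplicated list for the `-w` flag. A quick sketch; the file names and contents below are stand-ins:

```shell
# Stand-ins for downloaded wordlists.
printf 'api\ndev\nstaging\n' > commonspeak.txt
printf 'dev\ninternal\n' > custom.txt

# Merge and deduplicate into a single brute-force list.
sort -u commonspeak.txt custom.txt > bruteforce.list
cat bruteforce.list
```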
amass enum -brute -d twitch.tv -src
- Amass has a built-in wordlist, but you can supply your own.
amass enum -brute -d twitch.tv -rf resolvers.txt -w bruteforce.list
- resolvers.txt would be a list of DNS servers.
amass enum -brute -d twitch.tv -rf resolvers.txt -o subdomain.out
- Send output to a file.
- shuffledns (wrapper around massdns) https://github.com/projectdiscovery/shuffledns
- favfreak: https://github.com/devanshbatham/FavFreak
- Get the hash of the favicon and use Shodan to search with that hash.
masscan -p1-65535 -iL $ipFile --max-rate 1800 -oG masscan.log
- Feed the discovered open ports into an nmap service scan (nmap -sV).
- Feed the results into brutespray for credential brute-forcing.
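A rough sketch of gluing masscan's grepable output to nmap. The sample log below is a stand-in following the Host:/Ports: shape masscan's -oG output uses; verify against your actual log before relying on the field positions.

```shell
# Stand-in for masscan's grepable log.
cat > masscan.log <<'EOF'
Host: 192.0.2.10 () Ports: 80/open/tcp////
Host: 192.0.2.10 () Ports: 443/open/tcp////
Host: 192.0.2.20 () Ports: 22/open/tcp////
EOF

# Collapse to one nmap -sV command per host with its open ports.
awk '/Ports:/ {
       split($NF, p, "/")
       ports[$2] = ports[$2] ? ports[$2] "," p[1] : p[1]
     }
     END { for (h in ports) printf "nmap -sV -p%s %s\n", ports[h], h }' \
    masscan.log > nmap-jobs.txt
cat nmap-jobs.txt
```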
Screenshot tools to help prioritize work:
- Aquatone
- httpscreenshot
- eyewitness
Subdomain takeover reference: https://github.com/EdOverflow/can-i-take-over-xyz
This section was mostly curated from @tomnomnom's methodology/toolset. During the walkthrough Tom used Shopify as an example; make sure to substitute below where appropriate.
sudo apt-get install golang
go install github.com/tomnomnom/assetfinder@latest
go install github.com/tomnomnom/anew@latest
go install github.com/tomnomnom/httprobe@latest
go install github.com/tomnomnom/fff@latest
Your ~/go/bin should now look like this:
ls ~/go/bin
anew assetfinder httprobe fff
Add ~/go/bin to your PATH in ~/.zprofile:
echo "path+=(~/go/bin)" >> ~/.zprofile
source ~/.zprofile
cd /tmp/
wget https://github.com/findomain/findomain/releases/latest/download/findomain-linux
chmod +x findomain-linux
sudo cp findomain-linux /usr/bin/findomain
cd -
# Create and go into our workspace for this session.
mkdir -p ~/recon/shopify; cd ~/recon/shopify
# Create a file with the two wildcard domains noted on HackerOne.
cat wildcards
shopifykloud.com
shopify.com
# Pipe the wildcards to `assetfinder`, then to `anew`, which keeps only unique lines.
cat wildcards | assetfinder --subs-only | anew domains
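If you're unsure what `anew` is doing here, this is a rough pure-shell approximation of its behavior: print and append only the lines not already present in the file. (The function and domains below are stand-ins for illustration.)

```shell
# Rough emulation of `anew FILE`: append stdin lines to FILE only if
# they are not already there, echoing just the new ones.
anew_like() {
  while IFS= read -r line; do
    if ! grep -qxF -- "$line" "$1" 2>/dev/null; then
      printf '%s\n' "$line" | tee -a "$1"
    fi
  done
}

printf 'a.shopify.com\nb.shopify.com\n' > domains
printf 'b.shopify.com\nc.shopify.com\n' | anew_like domains
cat domains
```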
cat domains | httprobe -c 80 --prefer-https
findomain -q -f wildcards | tee -a findomain.out
Tom created a new file called from-findomain with some of the domains from the above command. He simply kept the sites that ended with .com; a regex works here:
grep '\.com$' findomain.out > from-findomain
I ran grep -v '\.com$' findomain.out to verify I did not miss any domains.
Next, further build out the list of available hosts:
cat from-findomain | anew domains; cat domains | httprobe -c 50 | anew hosts
Query and save each page.
cat hosts | fff -d 1 -S -o roots
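Once fff has saved the responses, you can grep the bodies for leads (keys, tokens, internal hostnames). The directory layout and file names below are stand-ins for illustration; check fff's actual output naming under your roots directory.

```shell
# Stand-in for fff's saved output: body and header files under roots/.
mkdir -p roots/example.com
printf 'var apiKey = "test123";\n' > roots/example.com/index.body
printf 'HTTP/1.1 200 OK\n' > roots/example.com/index.headers

# Grep only the saved bodies for interesting strings.
grep -rl --include='*.body' -E 'apiKey|secret|token' roots > interesting.txt
cat interesting.txt
```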
Branching off from @tomnomnom: send each page through Burp or a similar proxy.
cat hosts | fff -d 50 -x http://localhost:8080