As an Open Source Intelligence (OSINT) trainer for Social-Engineer, LLC and a speaker on the topic, one of the most frequent questions I get is, “What is your favorite OSINT tool?” There are hundreds, if not thousands, of OSINT tools available to a researcher, so picking one tool to recommend when asked this tends to be difficult. I like to approach this question with my own question, “What are you looking for when researching?” I think that is an important question a researcher must answer before they start an investigation. There is no one tool to rule them all, no matter what anyone tells you. Every investigation is different. Your approach should keep that in mind from the very beginning.
What Kind of Researcher Are You?
There are plenty of resources that try to define and explain the term “OSINT.” However, there is one thing that needs to be in the mind of anyone looking into the practice or trying to expand their pre-existing knowledge: you need to define the type of research you will be doing. You can make use of OSINT techniques in so many different aspects of your life and job that the number of directions it can take you is often overwhelming. Are you a law enforcement researcher, looking for fugitives or missing persons? Are you a corporate researcher looking for details about a potential merger or acquisition? Maybe you’re in information security doing penetration testing or security awareness work; each field is different, yet similar. All of these fields could potentially share certain OSINT tools or use vastly different tools to accomplish the task at hand.
Defining a goal for your research will dictate which techniques and tools will be most applicable for the job. Think about it this way: would you use a manual screwdriver to frame a house, or a pneumatic nail gun to fix a watch? The goal dictates the tools you can and should use.
Automation Has Its Place, but at a Price
Automation is a great utility for large-scale information gathering engagements. When I need to collect as much data as possible in a short period of time, tools that automate manual tasks work wonders. Keep in mind, though, that there is a cost associated with any level of automation: verification of data. Discovering thousands of data points is only useful if they are accurate. Once you start seeing inaccuracies in the data, the integrity of the whole data set becomes suspect. Questionable data means you must manually go through those thousands of data points and verify them somehow before you can present them to a client or sponsor of your research.
Whether you notice any inaccuracies or not, it would be disingenuous to present a tool’s raw output as fact without verifying its accuracy. Manual OSINT gathering may not be the most exciting approach to research. However, every investigation will require some level of manual research, either to find that one tidbit of information you are missing, or to verify the output of an automated tool.
Sometimes OSINT Tools Don’t Paint the Full Picture
A quick example of this came up in a recent job a colleague and I were working on for a client. The goal of the research was to learn enough about the target to craft a very specific and targeted email, also known as a spear phish. Using this attack vector would test the target’s understanding of corporate policy.
My coworker had done quite a bit of research using various tools, and as a result we had quite a lengthy document of verified findings. There were a couple of gaps, though. We had possible residential addresses and a partial date of birth, but nothing so concrete that we could hand this information over to the client as-is and say, “We know all these facts about the target.” First, we needed to fill in the gaps. Our time on the project was running out, and we needed to button it up or else we couldn’t use some of the details we had spent significant time discovering.
This is when we decided to look for a resource that could tie the birthdate and the residential address together and finally verify whether we were right or wrong. My first thought was to search voter records. Many people are registered to vote in the US, and the data points in question are typically stored together. There are sites that correlate multiple states’ voter records for easy searching, and I use them frequently. Unfortunately, the state this target allegedly resided in was not among the states that were listed in my go-to resources.
Taking a Manual Approach to Complete the Picture
A tool would have taken longer to set up than a manual approach. A quick search revealed that the state we were interested in allowed voters to search for and locate their own voter records, as long as voters could provide some identifying information: Full Name, Date of Birth, Zip Code. Interesting… We had most of that information already.
We entered the full name we found and the zip code we assumed was correct, but we only had the month and year of birth, not the day. Luckily, the web form required users to select their DOB from drop-down options and had no observed limit on attempts. So, we entered our known data and proceeded to try each day of the month until we either found the record we were looking for or ran out of options.
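We did this by hand in a couple of minutes, but for readers curious how such an enumeration could be scripted, here is a minimal sketch in Python. The URL, form field names, and success check below are hypothetical placeholders, not the actual state site we used; they only illustrate the trial-and-error loop.

```python
import requests

# Hypothetical lookup endpoint and field names -- any real site will differ.
SEARCH_URL = "https://example-state.gov/voter-lookup"

# The data points we already had from earlier research.
known = {
    "first_name": "Jane",
    "last_name": "Doe",
    "zip_code": "12345",
    "dob_month": "06",
    "dob_year": "1980",
}

for day in range(1, 32):  # try each possible day of the month
    payload = dict(known, dob_day=f"{day:02d}")
    response = requests.post(SEARCH_URL, data=payload, timeout=10)

    # Assume a successful lookup returns a page containing "Voter Status";
    # in practice you would key off whatever the real site returns.
    if "Voter Status" in response.text:
        print(f"Match found with day of birth: {day:02d}")
        break
else:
    print("No matching record for any day of the month.")
```

In our case, the site’s lack of rate limiting made even a manual loop practical; if a site throttles or challenges repeated requests, this kind of enumeration quickly stops being viable.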
Sure enough, after a couple of minutes of trial and error, day 30 was the right one, and we had 1) a complete, verified date of birth and 2) confirmation, via a government resource, of the target’s registered address. We could not connect the address to the target anywhere else, because the target had bought the house through a trust, so there was no record of them owning the house at the address we found. Once those details were verified, a large portion of our discovered information could be included in our report, since it all branched off what had previously been just assumptions.
Knowing When to Pull Back
The story above also illustrates another point I would like to make. Knowing when to pull back from a data path comes with experience. If we had twice as much time on that engagement as we did, we may have found that key information in the deep recesses of the Internet…but we didn’t. We had to stop chasing dead ends, step back, and take a new path. In my experience, that is the single hardest thing a researcher needs to learn and remember during investigations. Mountains of data points that do not lead to your end goal are just piles, not products. Sometimes, pulling back sooner rather than later opens more paths to pursue. Other times, pulling back too early leaves usable data unfound. There is an art to it that an automated tool cannot replicate.
So for me, if I were forced to answer, “What is your favorite OSINT tool?” my answer will always be, “The brain of the researcher.” I know it’s not a sexy answer, but in my experience it is the overwhelming truth. Tools come and go. Some stop working due to changes in the data sources; sometimes the developers decide to stop maintaining their tool for any number of reasons; and sometimes your favorite tool is superseded by another that does the job more efficiently or more accurately. Only the researcher can decide to keep using a tool, switch away from it, or abandon it; a tool won’t make that decision for you.
You can attend the next class I am teaching in Norway at Hackcon15, or check our events page for other upcoming classes.
Written by: Ryan MacDougall
Sources: https://www.youtube.com/watch?v=DSEGmdzs9Kg&list=PL9fPq3eQfaaA_Wd3dSrA8WWdUrQm3k2Ix&index=5&t=0s
Image Source: https://www.netclipart.com/isee/obwwiw_clipart-of-tools/