Author
Jim Gogolinski
Head of Threat Research
Category
Conceal Recon Group
Published On
Aug 6, 2025
How Content and Context Help Uncover Hidden Threats
In our previous blog entry, we discussed how traditional detection methodologies are not sufficient for detecting today’s modern threats. Threat detection is a never-ending cat-and-mouse game. Let’s look at the typical lifecycle of a threat campaign:
An adversary comes up with a new attack vector and initiates a campaign. Depending on the adversary, their collection requirements, and the sophistication of the attack vector, a target set is identified and the attack is launched.
Some percentage of the attacks will be successful, and the campaign will remain undetected for a period of time.
The attacker will continue to utilize the attack vector for as long as possible.
At some point, the attack will be uncovered by a victim. The information will be disseminated, and security vendors will enhance their products to detect this new capability.
The attackers will continue to utilize the attack vector as it will take time for the new detections to migrate out into the field. This is important because while new techniques continue to arise, we still need to be able to detect older attacks, even years later.
The adversaries will then realize their vector has been detected, so they will modify it just enough to evade detection. Advanced actors have access to most off-the-shelf security products and can test their attacks against them before launch; for the slightly less advanced, there are sites on the dark web that offer the same service. Thus, the cycle begins anew.
One of the main issues that allows that cycle to continue is that many detections are based solely on static content. When a detection analyst is creating a new detection, there is always a battle between writing a detection that’s too precise and one that is too loose. In the first case, the detection is not likely to trigger any false positives; however, even against the unmodified threat it may not alert in some environments, and if the threat changes even a small amount, the detection will not fire. The converse is writing a detection that’s so loose that while it will, in fact, alert on the threat, it will also generate a number of false positives. A high volume of false positives creates a chain reaction in the SOC. First, it generates a lot of work for analysts, who have to process and close all the alerts; second, it erodes confidence in the detections, and the resulting alert fatigue may cause analysts to not analyze findings as thoroughly as they should. Neither is a good outcome.
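To make that tradeoff concrete, here is a minimal sketch in Python; the command lines and patterns are entirely made up for illustration. The overly precise rule misses a lightly modified variant of the threat, while the overly loose rule also fires on benign activity.

import re

# Hypothetical samples: two variants of a malicious command line and one benign one.
samples = {
    "original threat":  "powershell -enc SQBFAFgA -nop -w hidden",
    "modified threat":  "powershell /enc SQBFAFgA /noprofile /w hidden",
    "benign admin job": "powershell -file backup.ps1 -nop",
}

# Too precise: matches the exact original only, so the small modification slips past it.
precise = re.compile(r"powershell -enc \S+ -nop -w hidden")

# Too loose: fires on any PowerShell invocation using -nop/noprofile, including the benign job.
loose = re.compile(r"powershell.*(-nop|noprofile)", re.IGNORECASE)

for name, cmdline in samples.items():
    print(f"{name:16}  precise={bool(precise.search(cmdline))}  loose={bool(loose.search(cmdline))}")

Running this, only the original threat trips the precise rule, while the loose rule flags all three samples: a false negative in one direction and a false positive in the other.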
Likewise, we also touched on indicators of compromise (IOCs). As mentioned, IOCs still have a place in threat detection; however, they are not always as reliable as we would like them to be. Take domains, for example: companies like Microsoft, Amazon, and Google provide hosting space for their customers, and their entire domain and IP address spaces are often (incorrectly) added to allow lists. This happens for a few reasons: a user needs to access something in one of those spaces, so an allow rule gets added to the security policies, and companies tend to look leniently on the “top X” internet site lists. Just because a site is popular does not mean it cannot serve malicious content. Blocking by IP address also becomes a challenge for security teams. Remember the tradeoff we just discussed between being too precise and losing detections; URLs play a major role there as well. Blocking by top-level domain may not be possible based on the domain itself, and how far you traverse down the URL path affects what you can detect. Let’s also not forget that with many advanced threat actor TTPs, the malicious domains may only be active for a few days before the actors shift to other compromised infrastructure.
There are many hosting providers that serve a large number of unique websites from a single IP address. If one of those IP addresses gets added to your block list, inevitably you will have users complaining because they cannot access some site. This is even worse for the security analyst, who then has to demonstrate that the hosting server itself isn’t compromised, only that one (or several) of the sites on that platform is bad.
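A small sketch of that granularity problem, with hostnames and paths invented for illustration: blocking at the domain (or IP) level catches the reported threat but also every other tenant on the shared platform, while blocking a single path is narrower but trivially evaded if the actor moves to another tenant.

from urllib.parse import urlparse

# Hypothetical URLs on a shared hosting platform; names are invented for illustration.
urls = [
    "https://sites.example-host.net/tenant-a/login.html",  # reported as malicious
    "https://sites.example-host.net/tenant-b/docs.html",   # unrelated, legitimate tenant
]

def domain_block(url: str) -> bool:
    # Blocking at the hostname level: catches the threat but also every other tenant.
    return urlparse(url).hostname == "sites.example-host.net"

def path_block(url: str) -> bool:
    # Blocking one path prefix: narrower, but evaded if the actor shifts to another path.
    parsed = urlparse(url)
    return parsed.hostname == "sites.example-host.net" and parsed.path.startswith("/tenant-a/")

for url in urls:
    print(f"{url}\n  domain-level block: {domain_block(url)}  path-level block: {path_block(url)}")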
Now we know that IOCs are of limited value, so that leaves us with content. We’ve already seen some of the shortcomings of content-based detections and the false positive versus false negative issues they raise. So, what can we do to improve our detection rate? Content in context.
The reality is that heuristic detections are a driving force behind stopping malicious threat actors. Heuristic detections are not as exciting as knowing you’ve found the newest exploit from APT28; however, many of those discoveries started with a heuristic detection and a lot of analysis by analysts and researchers. Like most other detection methodologies, heuristic detections run the gamut from specific to very vague. For example, we may know this is a phishing attempt, an info stealer, or a first-stage downloader. There will also be times when you’re not sure what’s going on, and all you can say for sure is “this ain’t right.” Interestingly enough, “this ain’t right” is often the basis for zero-day findings. You are correct that this sounds a lot like static detection, and it would be if it were not for adding context into the mix.
Context more fully defines and enhances content. What was once a single dimension becomes multi-dimensional, which lets us bring the content into sharper, narrower focus. With this new dimension, we have vastly more surface to examine and monitor for anomalous characteristics and behaviors. The combination of the two also allows for slightly looser individual component detections, because when they’re overlaid, analysis of the union is much more conclusive.
Without going too deeply down the rabbit hole, some examples of context include characteristics of the destination environment, the user’s environment, dynamic modifications, behavioral changes, and historical information. A simple sketch of how such signals might be overlaid with a content detection follows below.
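This is a minimal sketch of content in context, not any particular product’s logic: several loose signals, none conclusive on its own, are combined into a single verdict. The signal names, weights, and threshold are all invented for illustration.

from dataclasses import dataclass

@dataclass
class Event:
    content_matches_loose_rule: bool   # content: a deliberately loose heuristic fired
    domain_first_seen_days: int        # historical: how long the destination has been known
    user_visited_before: bool          # user's environment: is this destination new for them
    page_rewrites_itself: bool         # dynamic modification: content changes after load

def score(event: Event) -> float:
    # Overlay the individual signals; each one alone is too weak to alert on.
    s = 0.0
    if event.content_matches_loose_rule:
        s += 0.4
    if event.domain_first_seen_days < 7:   # newly observed destination
        s += 0.3
    if not event.user_visited_before:
        s += 0.1
    if event.page_rewrites_itself:
        s += 0.3
    return s

suspicious = Event(True, 2, False, True)     # several weak signals line up
routine    = Event(True, 900, True, False)   # the same loose rule fires on a long-known site

for name, event in (("suspicious", suspicious), ("routine", routine)):
    verdict = "alert" if score(event) >= 0.7 else "log only"
    print(f"{name}: score={score(event):.1f} -> {verdict}")

The same loose content rule fires in both cases, but only the event whose context also looks wrong crosses the alert threshold; the routine one is simply logged.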
Utilizing the concept of content in context, in addition to using the most up-to-date and well-curated IOCs, provides tremendous protection against outside attackers attempting to gain credentials, information, and access. In our next entry, we will look at how content in context can help thwart inadvertent and malicious attempts to exfiltrate data from inside your networks.

