From traditional pentesting to continuous validation: how security testing evolves in dynamic environments
In technological environments that change constantly, evaluating security once or twice a year no longer reflects the real level of risk. This article analyzes how pentesting is evolving toward continuous validation models that combine automation, artificial intelligence, and human offensive analysis.
The current cybersecurity landscape is characterized by a sustained expansion of the attack surface and a growing sophistication of threats. The adoption of hybrid infrastructures, cloud services, and constant integrations with third parties has transformed how organizations manage their technological risk.
In this context, the security posture can no longer be understood as a fixed state, but rather as a variable that depends on configurations, interactions, and operational decisions that change continuously.
Recent data clearly reflects this pressure on organizational defenses. In 2024, more than 40,000 vulnerabilities were added to the National Vulnerability Database (NVD), representing a 39% increase compared to 2023 and a persistent source of technical risk. Meanwhile, the average global cost of a data breach in 2025 reached USD 4.44 million, remaining at historically high levels despite improvements in detection and containment.
These metrics reflect not only volume and cost, but also an environment of growing threats and significant financial consequences, particularly when organizations lack security validation mechanisms adapted to the speed of change in their technological environments.
Limitations of the point-in-time evaluation model
Pentesting has traditionally been the closest mechanism to a realistic simulation of an adversary. Its value lies in the ability to chain vulnerabilities, identify logical flaws, and analyze the potential impact on critical business processes. Unlike tools focused solely on detecting technical vulnerabilities, pentesting incorporates strategic analysis and offensive reasoning.
However, its traditional execution — typically annual or semi-annual — produces a technical snapshot valid at a specific moment in time, which can quickly become outdated. In a context where thousands of relevant vulnerabilities are reported each year and assets evolve constantly, relying exclusively on point-in-time assessments leaves exposure vectors unverified for prolonged periods.
Breach and Attack Simulation (BAS) platforms, for their part, represent a significant advance: they enable automated and recurrent execution of offensive techniques, generally aligned with frameworks such as MITRE ATT&CK. Their main contribution lies in repeatability, traceability, and the ability to measure the effectiveness of controls against predefined scenarios.
However, due to their programmatic nature, these solutions operate on defined catalogs of techniques and predefined sequences. While this strengthens operational consistency, it can limit the exploration of emerging combinations, business-specific logical flaws, or attack chains not initially considered in the simulation model.
In this sense, BAS platforms validate behavior against known techniques, but do not necessarily formulate new adversarial hypotheses or explore creative attack paths outside the predefined script.
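The catalog-driven execution model described above can be pictured with a minimal sketch. Everything here is a hypothetical illustration, not any vendor's actual API: the technique entries, the `simulate` callables, and the pass/fail convention are placeholder assumptions.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical BAS-style catalog: each entry maps a MITRE ATT&CK technique
# ID to a scripted check. Real platforms ship hundreds of such entries, but
# the execution model is the same: iterate a fixed list and record results.
@dataclass
class Technique:
    attack_id: str                 # MITRE ATT&CK technique ID
    name: str
    simulate: Callable[[], bool]   # True if the control blocked the technique

CATALOG = [
    Technique("T1110", "Brute Force", lambda: True),
    Technique("T1059", "Command and Scripting Interpreter", lambda: False),
    Technique("T1048", "Exfiltration Over Alternative Protocol", lambda: True),
]

def run_catalog(catalog):
    """Execute every predefined technique and summarize control effectiveness."""
    results = {t.attack_id: t.simulate() for t in catalog}
    blocked = sum(results.values())
    return results, f"{blocked}/{len(catalog)} techniques blocked"

results, summary = run_catalog(CATALOG)
print(summary)  # → "2/3 techniques blocked"
```

The limitation the article points out is visible in the structure itself: any attack path not present in `CATALOG` is simply never exercised, no matter how often the loop runs.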
Continuous validation: a necessity, not an alternative
The previous discussion reveals an evident tension: traditional pentesting offers analytical depth, while BAS platforms provide frequency and continuous measurement.
Both approaches solve different parts of the problem, but neither, in isolation, eliminates the gap between evaluation and operational change.
When an organization modifies infrastructure, incorporates new services, or adjusts configurations, a validation performed weeks earlier may no longer accurately reflect the current state. Not because it was incorrect, but because the environment itself has changed.
Industry figures — such as the high recurrence of digital extortion schemes and the sustained economic impact of cybercrime — indicate that adversarial activity is persistent. The exploitation of vulnerabilities does not occur sporadically; it occurs continuously, targeting whatever is available at any given moment.
In this context, continuity must be understood as a methodological adjustment to that reality. If the environment changes regularly and adversarial activity is constant, validation cannot depend exclusively on isolated exercises.
The goal is not to increase testing indiscriminately, but to keep security evaluation aligned with the real operational state of the environment.
At this point, automation becomes essential — but not sufficient. The repetitive execution of predefined techniques allows organizations to measure control consistency, but not necessarily to interpret the context in which those findings become relevant.
This is where artificial intelligence takes on a functional role. Not as a replacement for human offensive judgment, but as a mechanism to extend its reach.
AI makes it possible to analyze large volumes of configurations, logs, and testing results, identify recurring exposure patterns, and suggest possible chaining paths between vulnerabilities that, when evaluated individually, might appear minor.
Within a continuous validation model, AI acts as a correlation and prioritization layer. It reduces the time between detection and analysis, models probable escalation scenarios, and facilitates the identification of assets whose compromise would generate greater operational impact.
The value does not lie in automating thinking, but in accelerating processing. The formulation of adversarial hypotheses remains a strategic function; AI helps make it scalable.
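One way to picture this correlation-and-prioritization layer is a sketch that chains individually low-severity findings and re-scores each one by the impact of the worst asset its chain can reach. The findings, asset names, impact scores, and the single-link `enables` chaining rule are all illustrative assumptions, not Strike's actual model.

```python
# Illustrative correlation layer: findings that look minor in isolation
# (low standalone severity) are re-prioritized when they form a path to a
# high-impact asset. All data and the scoring rule are hypothetical.
findings = [
    {"id": "F1", "asset": "web-frontend", "severity": 3.1, "enables": "F2"},
    {"id": "F2", "asset": "internal-api", "severity": 4.0, "enables": "F3"},
    {"id": "F3", "asset": "billing-db",   "severity": 3.5, "enables": None},
    {"id": "F4", "asset": "dev-wiki",     "severity": 6.5, "enables": None},
]
asset_impact = {"web-frontend": 2, "internal-api": 5, "billing-db": 9, "dev-wiki": 1}

def chain_from(start, by_id):
    """Follow 'enables' links to build the attack chain rooted at one finding."""
    chain, current = [], start
    while current is not None:
        chain.append(current)
        current = by_id.get(current["enables"])
    return chain

def prioritize(findings):
    """Score each finding by the highest-impact asset its chain reaches."""
    by_id = {f["id"]: f for f in findings}
    scored = []
    for f in findings:
        reach = max(asset_impact[step["asset"]] for step in chain_from(f, by_id))
        scored.append((f["id"], f["severity"], reach))
    # Sort by reachable impact, not by standalone severity.
    return sorted(scored, key=lambda s: s[2], reverse=True)

for fid, severity, reach in prioritize(findings):
    print(fid, severity, reach)
```

Note the inversion this produces: F4 has the highest standalone severity (6.5) but the lowest priority, while the 3.1-severity F1 ranks first because its chain reaches the billing database. That re-ordering is the kind of contextual prioritization the paragraph above describes.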
Strike’s approach
Based on this methodological evolution, at Strike we have developed an approach that combines continuous automation, artificial intelligence, and human offensive expertise to validate the security posture of organizations operating in dynamic environments.
Our model aims to reduce the gap between operational change and security validation, enabling recurring evaluation of the real exposure of assets.
Strike’s platform performs continuous offensive emulations that analyze configurations, exposed surfaces, and potential attack paths across complex environments. Through artificial intelligence models, the system correlates findings, prioritizes risks according to their potential impact, and models escalation scenarios that reflect plausible adversarial behaviors.
However, automation alone does not constitute a complete validation.
For this reason, Strike’s approach also integrates offensive security experts, who analyze the results, formulate new adversarial hypotheses, and manually validate scenarios that require contextual interpretation or strategic creativity.
This hybrid model enables continuous risk evaluation, combining the processing speed of artificial intelligence with the analytical judgment of ethical hacking specialists.
The result is a validation process that does not rely exclusively on point-in-time exercises or closed catalogs of techniques, but rather evolves alongside the technological environment of the organization.
In a scenario where the attack surface constantly grows and adversaries operate without interruption, effective security is no longer based solely on detecting vulnerabilities, but on continuously validating how they could be exploited under real conditions.