DevSecOps: Integrating Security into Your Development Process


7 January 2026 11:52

When it comes to application security, there’s rarely a single problem. It’s almost always a chain of small flaws, poor decisions, and missing controls that, when added together, pave the way for serious incidents. This is also why relying on a single security tool is a dangerous illusion: it reassures management, sure, but it doesn’t prevent incidents from arriving—on time—at the most inopportune moment.

Over time, the industry has developed different approaches to address different categories of risks. There are many types of application security tests, and truly understanding their value means knowing what they detect, what they don't, and, above all, how to integrate them. Only in this way can we effectively reduce the attack surface and prevent problems that would otherwise emerge too late.

Guidelines can offer important support, but in a complex ecosystem—and even more so in enterprise contexts—translating principles into structured, rigorous, and effective processes is far from trivial. The reality is nuanced, full of exceptions and compromises.
Yet, this is precisely the effort worth making. And we will try to do it.

The DevSecOps model: what it is and how to apply it

The DevSecOps model (we’ve talked about it extensively in this series) represents a natural evolution of the DevOps concept, integrating security directly into the software development lifecycle. While DevOps focuses on collaboration between developers and operations teams to speed application releases, DevSecOps introduces security as a shared responsibility, distributing tasks and controls throughout the entire supply chain. In practice, every phase of the development process, from design to production, must consider security as a fundamental element rather than an afterthought. This approach reduces vulnerabilities, identifies issues before they become critical, and creates more robust applications from the start.

A central tenet of DevSecOps is continuous system validation. Each component of an application is tested multiple times, often by different people, to ensure that no error or vulnerability goes undetected. This repeated testing, performed by different teams across the entire IT chain (Development, Testing, Operations) using automated tools, dramatically reduces the risk of security vulnerabilities and ensures that every change or update is secure. In this context, security is no longer an isolated task for the Security department, but a collective effort that requires the active participation of developers, testers, operations teams, and security specialists.

The philosophy behind DevSecOps emphasizes that security is a shared responsibility: only when all members of the development chain perform their roles correctly can truly secure applications be built. It is therefore not enough to deploy advanced security tools and write clean code; we need to foster a culture in which every professional is aware of the risks and of the controls to implement during their work. Collaboration and communication between teams are essential, as any error or oversight can have significant impacts on the entire system.

DevSecOps isn’t just a theoretical concept: major industry players, such as the U.S. Department of Defense (DoD), Tesla, and Netflix, are adopting this methodology to ensure their code is reliable and secure. These organizations demonstrate that by seamlessly integrating security and development, it’s possible to reconcile rapid releases with data protection. Adopting the DevSecOps model thus allows for building more resilient and reliable software, protecting users and infrastructure without slowing innovation.

Static Application Security Testing (SAST)

Also known as SAST (Static Application Security Testing), this in-depth code analysis can identify vulnerabilities without actually running the application. The tool analyzes the source code or intermediate representations to understand where user-supplied data could end up in risky locations, such as database queries, shell commands, templates, or the file system. This method is one of the most effective approaches for detecting bugs early, when fixing them is still relatively inexpensive.

The strength of static analysis is its ability to effectively identify the most common flaws: injections, insecure deserialization, weak cryptography, authorization errors, dangerous functions, and cases where input data is not properly validated.
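
To make this concrete, here is a minimal, hypothetical Python example of the class of flaw static analysis is best at catching: user input concatenated into a SQL query, together with the parameterized version a SAST tool such as Bandit or CodeQL would typically point you toward.

```python
import sqlite3

def find_user_vulnerable(conn: sqlite3.Connection, username: str):
    # Tainted input flows straight into the query string: the classic
    # SQL injection sink that static analysis flags without running the app.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver handles the value separately,
    # so the same data flow is no longer an injection.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    payload = "' OR '1'='1"                      # classic injection payload
    print(find_user_vulnerable(conn, payload))   # returns every row
    print(find_user_safe(conn, payload))         # returns nothing
```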

This is especially useful when the analysis is integrated into the build pipeline and therefore runs during development itself: the developer detects the issue almost immediately, rather than seeing it in a report a month later.

There’s also a well-known weakness: scanners produce a lot of false positives, findings that the tool labels as vulnerabilities but that turn out not to be exploitable in the real application.

If an application features non-standard data flows, custom wrappers, and complex permissions, the tool may miss the real issue and bombard the team with alerts. A common rule of thumb is to monitor critical vulnerability classes rigorously and classify the remaining findings by risk and context.

  • DevSecOps phase: development and build pipeline (Plan, Code, Build, Test)
  • Preconditions: knowledge of the application domain, dependencies, and libraries
  • When to apply it: while writing the code and before each release
  • What it detects best: injections, unsafe calls, input-handling errors, and typical code-level vulnerabilities. By contrast, flaws that depend on runtime behavior, such as broken access control (BAC), are generally not detected at this stage.
Tool | Type | Notes
CodeQL | Open Source / Free | Powerful for custom queries, great on GitHub
SonarQube Community Edition | Open Source / Free | Code quality analysis + basic security
Bandit | Open Source / Free | Python-specific, detects vulnerabilities in your code
Brakeman | Open Source / Free | Ruby on Rails-specific, static vulnerability detection
Checkmarx CxSAST / Checkmarx One | Commercial / Paid | Enterprise depth, supports multiple languages and CI/CD integration
Fortify Static Code Analyzer | Commercial / Paid | Long-established and very reliable for large codebases
Veracode Static Analysis | Commercial / Paid | SaaS, CI/CD integration, and advanced reporting
Snyk (Code + SCA) | Commercial / Paid | Modern, DevSecOps oriented, supports vulnerability detection and dependency management
GitLab SAST (Ultimate) | Commercial / Paid | Integrated into GitLab Ultimate, easy CI/CD integration and automatic reporting

Dynamic Application Security Testing (DAST)

Also commonly called DAST (Dynamic Application Security Testing), this involves scanning the application in its execution environment. Dynamic testing works in reverse: the application is launched and attacked as a “black box” through its interfaces: web pages, mobile clients, APIs, queues, and integrations. This is closer to reality, as vulnerabilities often manifest not in the source code but in a combination of settings (hardening), the environment, and real user behavior and data.

The dynamic approach is effective at detecting misconfigurations, vulnerabilities related to outdated frameworks, authentication and session management issues, invalid security headers, XSS, certain injections, exposed admin panels, and “interesting” server responses that return unnecessary information. Hardening and segregation are particularly important for APIs: often, everything seems fine until you start pushing the boundaries of types, permissions, and call sequences.
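
As a trivial illustration of this outside-in perspective, the sketch below (assuming the `requests` library and a hypothetical staging URL) probes a running application for a few common security headers; this is one of the many checks a DAST scanner such as OWASP ZAP automates at far greater depth.

```python
import requests

# Headers whose absence dynamic scanners commonly report as findings.
EXPECTED_HEADERS = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def check_security_headers(url: str) -> dict:
    """Probe a running application and report missing security headers."""
    response = requests.get(url, timeout=10, allow_redirects=True)
    missing = [h for h in EXPECTED_HEADERS if h not in response.headers]
    return {
        "url": url,
        "status": response.status_code,
        "server_banner": response.headers.get("Server"),  # unnecessary information disclosure
        "missing_headers": missing,
    }

if __name__ == "__main__":
    # Hypothetical target: only probe systems you are authorized to test.
    print(check_security_headers("https://staging.example.com"))
```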

Dynamic analysis is limited by visibility: if a section of functionality is hidden behind permissions (e.g., authentication or a specific profile) or requires a specific scenario, the scanner may not be able to reach it. Another typical problem is the “noise” of low-level vulnerabilities that technically exist but don’t pose a real risk. These aren’t “false positives,” but rather low-impact vulnerabilities that alone don’t lead to any compromise. Therefore, dynamic test results almost always require contextual verification and prioritization. Less invasive than SAST, but necessary.

  • DevSecOps phase: development and build pipeline, plus the operational phases (Release, Deploy, Operate, Monitor)
  • Prerequisites: knowledge of the application domain, dependencies, and libraries. It can be used in both unauthenticated (black-box) and authenticated modes.
  • When to apply: after any substantial code change, and regularly on both externally exposed and internal surfaces
  • What it detects best: misconfigurations, outdated or unpatched components, session issues, injection vulnerabilities (SQL injection, XSS, etc.), and some interface and API vulnerabilities.
Tool | Type | Notes
OWASP ZAP | Open Source / Free | Open source web scanner, with plugins and CI/CD integration
w3af | Open Source / Free | Modular framework for web vulnerability detection
Nikto | Open Source / Free | Simple yet effective scanner for web servers
Arachni | Open Source / Free | Modular web scanner, suitable for automated testing
Burp Suite Community Edition | Open Source / Free | Basic scanner and proxy, for limited manual and automatic testing
Acunetix | Commercial / Paid | Modern web/app scanner with support for many vulnerabilities, including authenticated scanning
Tenable Web Scanner | Commercial / Paid | Part of the Tenable ecosystem, great for web scanning and Nessus/IO integration
Burp Suite Professional | Commercial / Paid | De facto standard for web penetration testing (scanner + proxy + manual toolkit)
Qualys Web Application Scanning (WAS) | Commercial / Paid | SaaS platform with broad coverage and enterprise reporting
AppSpider (Rapid7) | Commercial / Paid | Automatic web application and API scanner, with DevSecOps integrations

Interactive Application Security Testing (IAST)

Known in technical jargon as IAST (Interactive Application Security Testing), this interactive test falls somewhere in between: the application runs, but an agent or advanced diagnostic runs within it, monitoring which functions are actually called and what data flows through the code. Essentially, it’s an attempt to combine the best of both static and dynamic approaches: less guesswork, more facts.

IAST requires installing an agent inside the applications, controlled by a centralized infrastructure, with all the pros and cons that this entails.

Thanks to its “inside” visibility, interactive testing can often trace data thoroughly, from the input parameter all the way to the dangerous operation. This helps developers quickly understand exactly what needs to be fixed and reduces the number of empty alerts. This approach works particularly well in environments with extensive integration testing: familiar scenarios are run while the agent simultaneously collects signals about potential vulnerabilities.
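
A deliberately over-simplified sketch of the idea (not any vendor's actual agent): the query layer is instrumented at runtime, and whenever a value that entered the application as external input reaches a SQL sink unescaped while the normal test suite runs, the "agent" records a confirmed data flow.

```python
import sqlite3

findings = []
_tainted = set()   # values that entered the application from outside

def mark_tainted(value: str) -> str:
    """Called at the points where external input enters the application."""
    _tainted.add(value)
    return value

def instrumented_execute(conn: sqlite3.Connection, query: str, params=()):
    """Stand-in for the hook a real IAST agent injects into the database driver."""
    for value in _tainted:
        if value and value in query:   # tainted data reached the SQL sink unescaped
            findings.append({"sink": "sql", "query": query, "input": value})
    return conn.execute(query, params)

# --- an ordinary integration test exercising the application path ---
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
name = mark_tainted("alice' --")                 # simulated request parameter
instrumented_execute(conn, f"SELECT * FROM users WHERE name = '{name}'")

for finding in findings:
    print("confirmed data flow into sink:", finding)
```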

Here too, verbosity is a problem: highly critical findings (e.g., CVSS 9.8) may be reported because the agent observes them from inside the server, even though they sit in code paths that are never executed in production and would therefore never be reachable by an attacker.

The limitations are clear: agent integration, proper environment configuration, and adequate test coverage are all required. If a piece of functionality is never exercised by tests, interactive testing cannot analyze it. IAST therefore rarely stands alone and usually complements static and dynamic approaches.

  • DevSecOps phase: development and build pipeline (Release, Operate, Monitor)
  • Preconditions: an agent installed on the machine, plus knowledge of the application domain in order to recognize false positives
  • When to use: during application testing and during recurring operation
  • What it detects best: confirmed real data-flow traces, and vulnerabilities that manifest only in specific scenarios.
Tool | Type | Notes
DongTai IAST | Open Source / Free | Open-source tool for Java, detects runtime vulnerabilities through instrumentation and CI/CD integration
Contrast Community Edition (CE) | Open Source / Free | Free for open-source projects (1 app, 5 users), supports Java and .NET, basic IAST functionality
Hdiv Security Community Edition | Open Source / Free | Free edition for Java and Spring, simplified runtime detection and reporting
AppSensor (OWASP) | Open Source / Free | Open-source framework for active detection of runtime attacks in Java applications, useful for learning
Contrast Security Assess / Protect | Commercial / Paid | True enterprise IAST, CI/CD integrable, full runtime coverage and vulnerability analysis with code context
Synopsys Seeker (Black Duck Seeker) | Commercial / Paid | IAST platform with active verification, supports Java, .NET, and enterprise web applications
Hdiv Security Enterprise Edition | Commercial / Paid | Advanced IAST solution for large web applications, including runtime analysis and vulnerability management
Veracode IAST | Commercial / Paid | Integrated into the Veracode platform, it combines static and dynamic analysis with runtime monitoring
Checkmarx IAST | Commercial / Paid | Part of the Checkmarx ecosystem, runtime vulnerability detection and DevSecOps integration

Analysis of Software Components and Bill of Materials (SBOM)

A modern application is almost always composed of more than just its own code. Dependencies, packages, container images, frameworks, and even front-end builds: all of these elements provide functionality and can introduce vulnerabilities. Component analysis, together with the Software Bill of Materials (SBOM) that documents it, helps understand which versions of libraries are in use, whether they have known issues, and how urgently they need to be fixed.

It’s important not to turn the process into a hunt for every imaginable patch. Sometimes a vulnerability seems scary but is simply unreachable in your context, as we discussed in the previous chapter. Sometimes it’s the opposite: technically “average,” but it fits perfectly into your architecture and can perhaps be combined with other, equally low-impact vulnerabilities to create a vector with a very critical real-world impact. Defining priorities, defining acceptable licenses, and publishing a component list (SBOM) helps avoid having to reconstruct the overall picture in the middle of an incident.

A notable advantage of dependency analysis is that it’s usually easily automated: the check is performed during the build, prompts appear in the repository, and the team receives a clear list of “what to update.” Publicly available vulnerability databases, such as OSV and NVD, are often used as sources.
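
To show how easily this layer automates, the sketch below queries the public OSV database for advisories affecting a single pinned dependency (the package name and version are illustrative); SCA tools such as Grype run the same kind of lookup for every entry in the SBOM.

```python
import json
import urllib.request

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Ask the public OSV API which advisories affect one pinned dependency."""
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode()
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.load(response).get("vulns", [])

if __name__ == "__main__":
    # Illustrative pinned dependency taken from a requirements file or SBOM.
    for vuln in known_vulnerabilities("requests", "2.19.1"):
        print(vuln["id"], vuln.get("summary", ""))
```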

  • DevSecOps phase: development and build pipeline (Release, Deploy, Operate, Monitor)
  • Preconditions: essentially an inventory “sweep” of the system; knowledge of the application domain is required to recognize false positives
  • When to use: when running applications and whenever dependencies and libraries are updated
  • What it detects best: known vulnerabilities in third-party components, risky licenses, outdated packages.
Category | Tool | Type | Notes
Open Source | Syft | Free | Generates SBOMs from images/containers
Open Source | Grype | Free | Vulnerability scanner that consumes SBOMs
Open Source | CycloneDX Generator | Free | Multi-language CycloneDX SBOM generation
Commercial | Anchore Enterprise | Paid | Complete SBOM/SCA platform
Commercial | Mend SCA | Paid | SBOM generation + vulnerability management

Penetration testing: simulating a real attack

Penetration testing is always useful when it comes to uncovering application vulnerabilities, as it simulates an attack by an ethical hacker. The professional doesn’t simply “detect” vulnerabilities, but chains them together into a pathway that allows illicit access to the system and demonstrates the resulting damage.

Penetration tests are always useful if they aim to simulate a real cyber attack. However, penetration tests that merely perform reconnaissance, without demonstrating the “real impacts” on a system, add little value over automated techniques.

The real impact, specifically, answers the question: what could a cybercriminal do by exploiting the system’s weaknesses? If we can answer with “access to the database and exfiltration of customer data from the Internet with a low-privileged user account”, or “access to credit cards including CVV/CVC codes in pre-auth mode from an API exposed on the Internet”, or “access to the employee database including telephone number, name, surname, address, and so on, by exploiting a credential with a predictable password from the Internet”, or “the possibility of implanting malware or shutting down the system infrastructure”, then you are on the right track.

Yes, because this type of security testing looks for what might be called a “potential data breach”: a security incident that has not yet occurred, to be remediated before a real attacker tries to exfiltrate the actual “crown jewels” from the corporate infrastructure. A mature approach therefore includes preparation: agreeing on a perimeter, rules of engagement (with the SOC and the Blue Team), test accounts, secure configurations, and acceptance criteria.

Without this, the process quickly devolves into a mere “bug reporting at all costs” exercise, and the work serves no purpose other than spending company money in the wrong places. A good penetration testing report typically describes the problems encountered top-down, starting with the attack vectors that define the real impacts, followed by the attack path, the conditions for reproducing it, recommendations, and a realistic risk assessment.
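
One practical way to keep reports anchored to impact is to force every finding into a structure that leads with what the attacker obtains; the fields below are purely illustrative, not a formal standard.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PentestFinding:
    """Illustrative top-down structure for a single report entry."""
    impact: str               # what an attacker actually obtains, in business terms
    attack_path: List[str]    # chained steps, in order
    reproducibility: str      # preconditions and access required
    recommendation: str
    risk: str                 # realistic rating, agreed with the business
    references: List[str] = field(default_factory=list)

finding = PentestFinding(
    impact="Exfiltration of customer records from the Internet with a low-privileged account",
    attack_path=[
        "Predictable password accepted on the exposed login form",
        "Missing object-level authorization on /api/customers/<id>",
        "Bulk export endpoint returns the full customer table",
    ],
    reproducibility="Any authenticated low-privileged user; no internal network access needed",
    recommendation="Enforce object-level authorization and rate-limit the export endpoint",
    risk="Critical",
)
print(finding.impact)
```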

It’s important to remember that penetration tests are not a replacement for periodic audits, such as SAST, DAST, IAST, SBOM, and so on. Rather, they serve to “finalize” and “control” what previous technologies cannot “see,” provided they are done well.

  • DevSecOps phase: on a system up and running, after major releases, for critical systems, and after major infrastructure changes (Operate and Monitor)
  • Preconditions: expensive analyses, as they require specialized personnel
  • What it detects best: it simulates a real attack and reveals the attack chains a potential cybercriminal would use. It identifies application logic errors and vulnerabilities across all security layers (hardening, patching, secure code development).
Methodology | Focus | Ideal for
NIST SP 800-115 – Technical Guide to Information Security Testing and Assessment | Compliance & Governance | Official NIST standard; widely used for audits and regulated contexts; defines clear phases (Planning, Discovery, Attack, Reporting); less technical on exploits
PTES – Penetration Testing Execution Standard | Operational / offensive | Very practical; covers the entire pentest cycle up to post-exploitation; realistic "attacker-centric" approach
OSSTMM – Open Source Security Testing Methodology Manual | Scientific / metric | Rigorous and quantitative approach; includes metrics and scores; also covers physical, wireless, and social engineering
OWASP Web Security Testing Guide | Web & API | Web app and API testing benchmark; aligned with OWASP Top 10; great for authentication, session, and business logic testing
CREST Penetration Testing Methodology | Quality & Certification | Used by CREST-certified companies and professionals; strong focus on ethics, quality, and reporting
ISSAF – Information Systems Security Assessment Framework | Academic, generic audits | A broad framework that includes pentesting, auditing, and risk assessment; less widespread today but conceptually sound

Fuzzing (random mutation and testing of input data)

Fuzzing may seem, at first glance, a brutal and ineffective approach. It certainly makes you think of someone slamming their hands uncontrollably on the keyboard, and in fact, that’s not so far from the truth.

Flooding the system with garbage is partly true, but it’s intelligent and controlled garbage . The underlying idea is to systematically generate or modify input data with the goal of forcing the application into unexpected behavior, such as crashes, logic hangs, abnormal memory consumption, or inconsistent states that should never occur under normal conditions.

This technique is particularly effective when applied to components that handle complex parsing logic , such as parsers, file handlers, network protocols, and, more generally, any module that interprets structured data from outside. It is precisely in these areas that serious validation errors and incorrect assumptions about inputs are concentrated.

In modern applications, fuzzing is successfully used on APIs (altering parameters, types, and limits), on file processing mechanisms during upload and conversion, and on data serialization and deserialization processes, which can expose powerful Remote Code Execution (RCE) flaws. A well-conducted fuzzing activity goes beyond identifying simple crashes: as mentioned, it can uncover deeper vulnerabilities that, under certain conditions, lead to remote code execution, memory leaks, information disclosure, or the circumvention of security controls.
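
As a minimal sketch of what a harness looks like in practice, the example below uses Atheris (listed in the table further down) against a hypothetical in-house record parser; exceptions the application is expected to handle are swallowed, so anything else the fuzzer provokes is reported as a genuine finding, here an unhandled division by zero.

```python
import sys
import atheris

@atheris.instrument_func               # coverage feedback guides the mutations
def parse_record(raw: bytes):
    """Hypothetical in-house parser for "name:age" records."""
    text = raw.decode("utf-8")
    name, age = text.split(":")
    weight = 100 // int(age)           # hidden bug: age 0 raises ZeroDivisionError
    return name, weight

def test_one_input(data: bytes):
    try:
        parse_record(data)
    except (UnicodeDecodeError, ValueError):
        pass                           # rejections the application is expected to handle
    # any other exception is reported by the fuzzer along with the offending input

if __name__ == "__main__":
    atheris.Setup(sys.argv, test_one_input)
    atheris.Fuzz()
```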

As invasive or annoying as the approach may seem, it’s always preferable for these anomalous behaviors to be discovered in a controlled test environment , where “destructive” activities can be performed, rather than in a production environment or, worse, exploited by a real attacker. However, from an operational standpoint, fuzzing is a complex technique and is difficult to master. It requires time, computational resources, and careful configuration. It’s necessary to determine which components to test, how to effectively measure code coverage, how to manage and deduplicate generated crashes, and how to transform a raw error into a useful and understandable result for developers. In practice, therefore, the tendency is to start with the areas considered most risky and gradually expand coverage over time.

  • DevSecOps phase: system in a test environment, especially after changes to parsers and communication protocols (Test)
  • Preconditions: highly specialized personnel
  • What it detects best: data-handling errors, overflows, hangs, crashes, unexpected conditions.
Product | Application Type | License / Model | Notes
AFL / AFL++ | C / C++ binaries | Open Source | The "de facto" standard for coverage-based fuzzing.
libFuzzer | C / C++ binaries | Open Source | In-process, excellent for testing specific libraries.
OSS-Fuzz | Open source projects | Open Source (Service) | Google infrastructure integrating AFL, libFuzzer, and Honggfuzz.
Honggfuzz | C / C++ binaries | Open Source | Supports multi-threading and hardware-based branch counting.
Peach Fuzzer | File formats, protocols | Open Source (Legacy) | Historic "generation-based" fuzzer (now part of GitLab).
ZAP Fuzzer | Web app / API | Open Source | Part of OWASP ZAP, great for HTTP input.
Radamsa | File formats | Open Source | Extremely versatile "black-box" mutational fuzzer.
Boofuzz | Network protocols | Open Source | Successor to Sulley, ideal for custom protocols.
Atheris | Python | Open Source | Coverage-guided fuzzer for Python code and C extensions.
Go-fuzz | Go applications | Open Source | Specialized in finding panics and bugs in Go.
Mayhem | Binaries / complex apps | Commercial | Combines fuzzing and symbolic analysis (next-gen).
Defensics (Synopsys) | Protocols, IoT, telecom | Commercial | Top of the range for robustness testing of network protocols.
Burp Suite Pro | Web applications | Commercial | The standard for manual and semi-automatic web fuzzing.
AppSpider | Web app / API | Commercial | Focused on dynamic DAST scanning.

Agentic Systems Based on Artificial Intelligence

For years, Breach and Attack Simulation (BAS) has represented an important step toward automating offensive security, allowing organizations to simulate real attacks and evaluate the effectiveness of defensive controls. However, these systems have always exhibited a structural limitation: predefined, deterministic, and highly monolithic behavior, confined to specific systems, techniques, or attack vectors. In other words, the BAS observes and replicates known scenarios, but struggles to adapt and imagine new scenarios like a human attacker would.

The advent of agent systems based on generative AI marks a profound paradigm shift. It’s no longer a matter of executing a sequence of codified techniques, but of autonomous agents capable of observing, planning, acting, and adapting based on context. Thanks to large language models and multi-agent systems, it’s now possible to explore complex attack surfaces with a speed and breadth that significantly surpass the manual approach of traditional penetration testers. This operational advantage is not marginal: it drastically reduces discovery time and expands the scope of explored possibilities, but there are some drawbacks…

The problem, however, remains unchanged and is far from trivial. In operational and production environments, the fundamental question is always the same: how can we allow these agents to operate without direct human control, while maintaining non-invasive, silent, and safe behavior?

Even with traditional BAS, maintaining this balance was a herculean task, because the pervasiveness of an attack is always inversely proportional to the system’s stability and operational speed. With autonomous agents, capable of learning and making autonomous decisions, complexity increases exponentially. A poorly configured agent can easily cross the line between simulation and catastrophic impact on a system (to take it to the extreme, such as abusing an SQL injection to wipe out a live database).

These agents don’t just run exploits or scans, but are also capable of reasoning about outputs , concatenating heterogeneous information, choosing alternative strategies, and adapting their behavior in real time. This goes beyond the constant and unpredictable risk of “hallucinations,” especially in contexts where they must “invent.” In practice, they are increasingly approaching the cognitive model of a real-world attacker, capable of moving laterally, changing tactics, and exploiting emerging opportunities. This is where the line between testing and realistic simulation begins to blur.
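
Stripped of any real model or tooling, the control loop of such an agent can be sketched roughly as below: `plan_next_action` stands in for the LLM planning step (here a deterministic stub), and every proposed action has to pass an explicit allow-list and a hard step budget before it is executed, which is exactly where the governance discussed next comes in.

```python
from dataclasses import dataclass, field

ALLOWED_ACTIONS = {"port_scan", "http_probe", "read_public_docs"}   # agreed scope

@dataclass
class AgentState:
    goal: str
    observations: list = field(default_factory=list)

def plan_next_action(state: AgentState) -> dict:
    """Stub for the LLM planning step: observe the state, propose the next move."""
    if not state.observations:
        return {"type": "port_scan", "target": "staging.example.com"}   # hypothetical target
    return {"type": "stop", "reason": "demo plan exhausted"}

def execute(action: dict) -> str:
    """Stub for tool execution; a real agent would invoke scanners or HTTP clients here."""
    return f"simulated result of {action['type']} against {action.get('target')}"

def run_agent(state: AgentState, max_steps: int = 5) -> AgentState:
    for _ in range(max_steps):                       # hard step budget for cost and observability
        action = plan_next_action(state)
        if action["type"] == "stop":
            break
        if action["type"] not in ALLOWED_ACTIONS:    # governance gate: refuse out-of-scope actions
            state.observations.append(f"blocked out-of-scope action: {action['type']}")
            continue
        state.observations.append(execute(action))   # observation feeds the next planning step
    return state

print(run_agent(AgentState(goal="map the exposed attack surface")).observations)
```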

The challenge of the coming years will therefore not only be technological, but also ethical, operational, and related to governance. Agentic systems for penetration testing promise unprecedented realism, but they require new control models, clear operational limits, and advanced observability mechanisms. Without these elements, the risk is introducing tools into corporate environments that behave too much like real attackers, without the same responsibilities and constraints.

  • DevSecOps phase: system in a test environment, as a continuous scanning cycle (TO DO)
  • Preconditions: personnel specialized in the use of these systems, which are still being standardized
  • What it detects best: attack vectors, unknown security bugs, overflows, hangs, crashes, unexpected conditions.
Tool | Released | Type | Description
PentestAgent | January 2026 | Open Source | AI agent for automated penetration testing, designed to autonomously orchestrate attack and analysis phases
Shannon | December 2025 | Open Source | AI-based continuous pentesting and red teaming framework, oriented towards adaptive simulations
Villager | December 2025 | Open Source | AI and LLM-based attack framework, integrable with Kali Linux, focused on aggressive and realistic behaviors
BruteForce AI | September 2025 | Open Source | AI system for intelligent brute-force attacks, optimized through machine learning
HackSynth | February 2025 | Open Source | LLM-assisted penetration testing platform, oriented to cognitive support of the human tester
Mantis | August 2025 | Open Source | Defensive tool that automatically simulates attacks through generative agents (LLMs)

Common mistakes that ruin even a good cyber program

One of the major mistakes in application security is relying on a single tool and considering the problem closed . Static analysis (SAST) fails to detect configurations or behaviors in the execution environment, while a dynamic scanner (DAST) often fails to understand business logic. Similarly, dependency checking (SBOM) doesn’t protect against your own authorization errors or poor development practices. Without an integrated combination of approaches, you’re inevitably limited to choosing which classes of problems to ignore, leaving significant security gaps.

Another common mistake is neglecting the results management process. Tests and scans produce large amounts of information, but if they aren’t prioritized, validated, and transformed into concrete actions , the team quickly perceives these outputs as noise and the process loses credibility. It’s crucial to define consistent criticality rules, short review cycles, and clear assignments of those responsible to turn results into measurable interventions. Metrics also matter: it makes sense not to rely on the total number of issues detected, but rather on the number of issues related to the actual risk that have been resolved.
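
What "consistent criticality rules" can look like in practice is sketched below; the fields and thresholds are invented for illustration, but the point is that every raw finding is mapped to an explicit decision instead of piling up as noise.

```python
# Findings as they might arrive from SAST/DAST/SCA exports (fields are illustrative).
findings = [
    {"id": "SQLI-12",       "severity": "critical", "reachable": True,  "exposed": True},
    {"id": "CVE-2023-0001", "severity": "high",     "reachable": False, "exposed": True},
    {"id": "HDR-7",         "severity": "low",      "reachable": True,  "exposed": True},
]

def triage(finding: dict) -> str:
    """Map raw severity plus context onto a concrete, owned action."""
    if finding["severity"] == "critical" and finding["reachable"] and finding["exposed"]:
        return "block-release"        # fix before shipping
    if finding["severity"] in ("critical", "high"):
        return "ticket-this-sprint"   # tracked, with an owner and a deadline
    return "backlog"                  # reviewed periodically, not silently ignored

decisions = {f["id"]: triage(f) for f in findings}
print(decisions)

# The pipeline gate fails only on findings that carry real, reachable risk.
if "block-release" in decisions.values():
    raise SystemExit("release blocked: unresolved critical, reachable, exposed findings")
```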

Finally, the third mistake is demanding perfection from the start. It’s much more effective to start with a minimal set of controls, achieve operational stability, and gradually increase rigor. Otherwise, the pipeline slows down, reports pile up, and users begin to bypass controls, compromising the very value of the implemented security. Security perceived as an obstacle or nuisance is not respected and, in fact, is not security at all.

In conclusion

The primary challenge in all security programs, today more than ever, is commitment. Advanced tools or sophisticated methodologies are not enough: CEOs, IT and Security directors, and team leaders must fully understand the need to work in a structured and coordinated manner, and translate this vision into concrete behaviors throughout the organization. Only when a culture of security spreads from the top to the rest of the company’s workforce will an ecosystem be created in which each individual knows their role and responsibilities and can contribute significantly to reducing risks.

The real world, however, isn’t perfect, and security isn’t built overnight. Everything develops gradually, with incremental steps, continuous iterations, countless mistakes, and constant adaptation to technological and business changes. Zero risk simply doesn’t exist: even after investing millions in people, processes, and software, a data breach could still occur. This doesn’t mean failure; it highlights that security is a journey, not a destination, and that a system’s resilience depends on its ability to learn from incidents and improve continuously (which brings us back to mistakes).

Ultimately, everything we’ve seen teaches us a fundamental truth: tools alone are useless without people who understand them and can interpret their results. Very often, people think that buying an appliance, plugging it into the rack, pressing the start button, and watching the lights come on will solve the problem.

In reality, that’s where it all begins. Security requires awareness, expertise, critical analysis, and the ability to act continuously. It’s a long journey, in which technology, processes, and people must move together; otherwise, even the best solutions remain just a theoretical promise.


Massimiliano Brolli
Responsible for the RED Team and Cyber Threat Intelligence of a large Telecommunications company and 4G/5G cyber security labs. He has held managerial positions ranging from ICT Risk Management to software engineering to teaching in university master's programs.
Areas of Expertise: Bug Hunting, Red Team, Cyber Intelligence & Threat Analysis, Disclosure, Cyber Warfare and Geopolitics, Ethical Hacking