Before the report, Part 1: verifying what you actually found
The most dangerous moment in security research is not when you look for a vulnerability. It is when you think you have found one. The discipline of asking "am I actually sure?" is harder than it sounds.
Something happens when you think you have found a vulnerability. A very particular feeling — part recognition, part excitement, part the specific alertness of seeing a pattern resolve into meaning. That feeling is not wrong. But it is also not evidence. The gap between noticing something that looks like a vulnerability and knowing that you have found one is where a great deal of careful work lives.
This is Part 1 of two. Part 1 is about verification — the process of moving from "this looks like something" to "this is something". Part 2 is about writing it up. Both matter. But a report about something that is not quite real is worse than no report at all: it consumes the receiving team's time, damages credibility, and occasionally obscures the thing you were actually looking for.
The first question: does the code path still exist?
Security research often involves reading source code. Source code is versioned. The code path that looks interesting in the version you are reading may have been patched, restructured, or removed entirely in the current version. Before you invest significant effort in understanding a potential issue, establish which version you are reading and what has changed since.
This sounds obvious and is routinely skipped. The consequence of skipping it is reports about vulnerabilities that were patched before the report arrived, or — more embarrassingly — reports about code paths that were removed years ago. Check the version. Check the current source. The diff between them is often as informative as the original code.
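One way to mechanise that check is a small script over the project's version control. A minimal sketch in Python, assuming a local git checkout; REPO, SUSPECT_FILE, and READING_TAG are hypothetical placeholders for your own target, not real project paths:

```python
# Sketch: confirm a code path still exists before investing in it.
# Assumes a local git checkout of the target project.
import subprocess

REPO = "/path/to/target-checkout"    # hypothetical checkout location
SUSPECT_FILE = "src/parser/frame.c"  # hypothetical file you were reading
READING_TAG = "v2.4.1"               # hypothetical version you were reading

def git(*args: str) -> str:
    """Run a git command in REPO and return its stdout."""
    return subprocess.run(
        ["git", "-C", REPO, *args],
        capture_output=True, text=True, check=True,
    ).stdout

# Does the file still exist at HEAD?
head_files = git("ls-tree", "-r", "--name-only", "HEAD").splitlines()
if SUSPECT_FILE not in head_files:
    print("Code path gone at HEAD: the finding may already be moot.")

# What changed between the version you read and the current tree?
# The diff is often as informative as the original code.
print(git("log", "--oneline", f"{READING_TAG}..HEAD", "--", SUSPECT_FILE))
print(git("diff", f"{READING_TAG}..HEAD", "--", SUSPECT_FILE))
```

The details will vary by project, but the principle does not: read the history of the code path before you read the code path a second time.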
The second question: is this actually exploitable?
Defence-in-depth exists. Operating systems, browsers, and network services are layered: a problem in one layer may be meaningfully constrained by another layer that prevents practical exploitation. The difference between a genuine vulnerability and a defence-in-depth finding is the difference between a report a vendor can act on and a report that documents something the vendor already knows about and has already mitigated at a different layer.
The test is simple in principle and laborious in practice: can you build a minimal proof of concept that demonstrates observable impact? Not "would produce" impact — actual impact, visible in output, in a crash, in a measurable change in system state. If you cannot build the PoC, you do not yet know whether the finding is exploitable. You have a hypothesis, not a finding.
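A minimal sketch of what such a harness can look like, in Python; the target binary, its flags, and the trigger bytes are all hypothetical stand-ins for whatever your finding actually involves:

```python
# Sketch: a minimal PoC harness that insists on *observable* impact.
# "./target-parser" and the crafted input are hypothetical stand-ins.
import signal
import subprocess

crafted = b"\x00" * 16 + b"A" * 4096  # hypothetical trigger input

proc = subprocess.run(
    ["./target-parser", "--stdin"],   # hypothetical target invocation
    input=crafted,
    capture_output=True,
    timeout=10,
)

# "Would produce" impact is not enough: look for a measurable change,
# such as a crash signal, unexpected output, or altered system state.
if proc.returncode < 0:
    sig = signal.Signals(-proc.returncode).name
    print(f"Observable impact: target died with {sig}")
elif proc.returncode != 0:
    print(f"Abnormal exit {proc.returncode}; inspect stderr: {proc.stderr!r}")
else:
    print("No observable impact yet: still a hypothesis, not a finding.")
```

The most important branch is the last one. If the harness observes nothing, the honest conclusion is that you are not done, not that the impact is implied.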
The third question: what does it actually mean?
Assuming the code path exists in the current version and you can demonstrate impact: what is that impact? This question has two parts. The first is technical: what can an attacker actually do? The second is contextual: who is the attacker, from what position, against what target? A vulnerability exploitable only from inside a sandbox is different from one exploitable from the network. A vulnerability requiring local access is different from one requiring no access at all.
Being precise about impact is not about minimising findings. It is about being useful. A report that says "an attacker with local user access can cause a kernel panic" is actionable. A report that says "this could potentially be exploited in a variety of scenarios" is not. The receiving engineer needs to triage your finding against everything else on their queue. Help them do that.
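One low-tech way to enforce that precision is to fill in a small structure before writing a word of the report. A sketch in Python; the fields and the example values, which echo the kernel-panic example above, are illustrative rather than any standard taxonomy:

```python
# Sketch: a structure that forces precision about impact before
# the report is written. Fields and values are illustrative only.
from dataclasses import dataclass

@dataclass
class ImpactStatement:
    attacker_position: str   # e.g. network, local user, sandboxed process
    access_required: str     # e.g. none, authenticated user, root
    observed_impact: str     # what the PoC actually demonstrated
    verified_against: str    # which versions you actually tested

finding = ImpactStatement(
    attacker_position="local user",
    access_required="unprivileged shell access",
    observed_impact="reproducible kernel panic via the attached PoC",
    verified_against="current HEAD of the target tree",  # placeholder
)
```

If any field can only be filled with "potentially" or "in a variety of scenarios", that is a signal to go back to verification, not forward to the report.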
Part 2 will cover the actual writing. But none of it matters if you have not done the work in this post first. Verify the path. Build the PoC. Be precise about impact. Then write.
The honest ping acknowledges uncertainty. Before you report a finding, ask: is this an answer, or is it still an echo? The distinction is everything — for the engineer who receives it, and for the researcher who sends it.