Vulnerability Disclosure Under the CRA: Setting Up Your First PSIRT
If you make industrial sensors, smart home devices, or any connected product, there is a good chance your company has never had a formal process for handling security vulnerabilities. You might not even have a dedicated security team. That was acceptable in the past. Under the EU Cyber Resilience Act (CRA), it will no longer be.
Starting September 2026, manufacturers of products with digital elements must be able to receive vulnerability reports, triage them, fix them, and report actively exploited vulnerabilities to ENISA within strict timelines. The organizational structure that makes all of this possible is called a PSIRT — a Product Security Incident Response Team.
This guide walks you through what a PSIRT is, why the CRA demands one, and how to build yours from scratch, even if you are a small or mid-sized manufacturer with limited security resources.
Why Manufacturers Need a PSIRT
For decades, traditional manufacturers — companies building hardware, embedded systems, industrial equipment, and IoT devices — treated cybersecurity as someone else's problem. The product shipped, the customer plugged it in, and unless something physically broke, there was no feedback loop for security issues.
That era is ending. The CRA fundamentally changes the obligations manufacturers have after a product reaches the market. Article 14 introduces mandatory vulnerability and incident reporting requirements that take effect in September 2026. If a vulnerability in your product is actively exploited in the wild, you must notify ENISA within 24 hours of becoming aware. You must follow up with a detailed notification within 72 hours. And you must deliver a final report within 14 days.
You cannot meet these timelines through improvisation. You need a standing team with defined roles, documented processes, and the authority to coordinate across engineering, legal, and product management. That team is a PSIRT.
Without a PSIRT, vulnerability reports arrive and sit in a general support inbox. Nobody knows who is responsible. Nobody knows what to do. The 24-hour clock expires, and you are in violation of a regulation that carries fines of up to 15 million euros or 2.5 percent of worldwide annual turnover, whichever is higher.
What Is a PSIRT
A Product Security Incident Response Team (PSIRT) is a function within your organization dedicated to receiving, evaluating, and responding to security vulnerabilities in the products you ship to customers.
It is important to distinguish a PSIRT from a CSIRT (Computer Security Incident Response Team). A CSIRT handles security incidents affecting your internal IT infrastructure — a phishing attack on your employees, a breach of your corporate network, a ransomware event. A PSIRT, by contrast, handles vulnerabilities in the products your customers use. If a researcher discovers a buffer overflow in your smart thermostat's firmware, that is a PSIRT matter, not a CSIRT matter.
Some large organizations maintain both a PSIRT and a CSIRT as separate teams. For most manufacturers, especially SMEs, a single small team can handle product security responsibilities, but the scope and processes must be clearly defined. The key principle is this: your PSIRT is outward-facing. It deals with the security of what you sell, not the security of what you use internally.
The concept is well-established in the software industry. Companies like Cisco, Siemens, and Bosch have operated PSIRTs for years. What the CRA does is extend this expectation to every manufacturer placing products with digital elements on the EU market, regardless of size or industry.
The CRA's Reporting Requirements
Understanding the specific timelines is critical because they define the operational cadence your PSIRT must be able to sustain.
When you become aware of an actively exploited vulnerability in one of your products, or of a severe security incident impacting product security, the following deadlines apply:
- 24-hour early warning: Submit a preliminary notification to ENISA through the single reporting platform. This must indicate whether the exploitation is suspected to be malicious and whether there is potential cross-border impact. You do not need full details at this stage — the purpose is to alert authorities quickly.
- 72-hour vulnerability notification: Follow up with a more complete assessment. This includes the severity of the vulnerability, its potential impact, and any corrective measures you have taken or plan to take.
- 14-day final report: Deliver a detailed report covering the root cause, the full scope of affected products, remediation actions taken, and, where known, information about the threat actor or exploitation method.
These obligations apply to actively exploited vulnerabilities and severe incidents. The clock starts when you become "aware" — meaning when you have reasonable grounds to believe the vulnerability is being exploited or the incident has occurred. You cannot delay awareness by avoiding monitoring. The CRA expects you to actively track the security status of your products.
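Because the three deadlines all run from the same moment of awareness, it helps to compute them mechanically the instant a case is opened. The sketch below is illustrative only; the function and field names are our own, not from the regulation or any official tooling.

```python
from datetime import datetime, timedelta

def cra_deadlines(aware_at: datetime) -> dict:
    """Compute the CRA notification deadlines from the moment of awareness."""
    return {
        "early_warning": aware_at + timedelta(hours=24),       # 24-hour early warning
        "vulnerability_notification": aware_at + timedelta(hours=72),  # 72-hour follow-up
        "final_report": aware_at + timedelta(days=14),         # 14-day final report
    }

# Example: awareness at 09:30 on 14 September 2026 (hypothetical date)
aware = datetime(2026, 9, 14, 9, 30)
d = cra_deadlines(aware)
print(d["early_warning"])  # 2026-09-15 09:30:00
```

Stamping these deadlines onto the case record at intake time removes any ambiguity about when each notification is due.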
Additionally, you must notify affected users without undue delay, providing actionable guidance on mitigation or remediation steps they can take.
Building Your PSIRT: Team Structure
One of the biggest misconceptions about PSIRTs is that they require a large, expensive team. They do not. For a small or mid-sized manufacturer, two to three people with clearly defined roles can form an effective PSIRT.
The key roles are:
- PSIRT Coordinator: This person owns the overall process. They are the single point of accountability for vulnerability handling, responsible for ensuring reports are tracked, timelines are met, and stakeholders are informed. In many organizations, this is a senior engineering manager or a product security lead.
- Technical Analyst: This person triages incoming vulnerability reports, reproduces the issue, validates severity, and works with the development team to build and test fixes. They need hands-on technical skills relevant to your product stack.
- Communications Lead: This person handles external communications — coordinating with vulnerability reporters, drafting security advisories, managing ENISA notifications, and ensuring customers are informed. In a regulated context, this role also interfaces with legal counsel.
In small companies, one person may wear multiple hats. Your lead firmware engineer might serve as both the technical analyst and the PSIRT coordinator. Your product manager might handle communications. That is fine, as long as responsibilities are explicitly assigned and documented. What you cannot afford is ambiguity about who does what when a critical vulnerability lands in your inbox.
Document your PSIRT charter: who is on the team, what their roles are, what authority they have, and what escalation paths exist. This document does not need to be long, but it needs to exist.
Creating a Vulnerability Intake Process
Before you can handle vulnerabilities, people need a way to report them to you. This is your vulnerability intake process, and it is one of the first things you should set up.
Set up a dedicated email address. The industry standard is security@yourcompany.com. This should be monitored by your PSIRT, not buried in your general support queue. Configure it so that at least two team members receive incoming messages, ensuring coverage during vacations or sick leave.
Publish a security.txt file. RFC 9116 defines a standard machine-readable file that tells security researchers how to report vulnerabilities to your organization. Place it at /.well-known/security.txt on your website. At minimum, it should contain:
- A contact email (your security@ address)
- A link to your vulnerability disclosure policy
- A preferred language for reports
- An expiration date (so researchers know the information is current)
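A minimal security.txt following RFC 9116 might look like the example below. The URLs and addresses are placeholders; replace them with your own, and note that `Contact` and `Expires` are the only fields the RFC requires.

```
Contact: mailto:security@yourcompany.com
Policy: https://yourcompany.com/security/vulnerability-disclosure
Preferred-Languages: en, de
Encryption: https://yourcompany.com/security/pgp-key.txt
Expires: 2027-01-01T00:00:00.000Z
```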
Create a Vulnerability Disclosure Policy (VDP). Publish a dedicated page on your website that explains:
- How to submit a vulnerability report
- What information to include (affected product, firmware version, steps to reproduce)
- Your commitment to acknowledge reports within a defined timeframe
- Safe harbor language assuring researchers they will not face legal action for good-faith research
- Your expected timeline for resolving reported issues
Define an acknowledgment SLA. Researchers expect a response. Industry best practice is to acknowledge receipt within 48 hours. This does not mean you have validated the issue in 48 hours — it means you have confirmed you received the report and it is being reviewed.
Support encrypted communications. Publish a PGP/GPG public key alongside your security.txt and VDP so that researchers can encrypt sensitive vulnerability details before sending them. This is especially important for hardware and IoT vulnerabilities, where exploitation details could put users at immediate risk.
Coordinated Vulnerability Disclosure
Coordinated Vulnerability Disclosure (CVD) is the process by which a vulnerability reporter and a product vendor work together to fix a vulnerability before it is made public. It is the standard approach used across the security industry, and the CRA explicitly expects manufacturers to participate in it.
The typical CVD workflow looks like this:
- A researcher discovers a vulnerability in your product.
- The researcher reports it to you through your intake channel.
- You acknowledge receipt and validate the issue.
- You develop a fix and coordinate the timeline for public disclosure with the researcher.
- You release the patch and publish a security advisory.
- The researcher publishes their findings (typically after the patch is available).
The standard disclosure timeline in the industry is 90 days from the initial report. This gives the manufacturer time to develop, test, and release a fix before the vulnerability becomes public knowledge. Some researchers or organizations use shorter windows (30 or 60 days), and critical vulnerabilities with active exploitation may warrant even faster timelines.
Why does this matter? Because security researchers will find vulnerabilities in your products. This is not a hypothetical. It is a certainty, especially for connected devices. The question is whether those researchers report to you or go elsewhere — selling to vulnerability brokers, publishing directly, or reporting to regulators without giving you a chance to fix the issue.
A published VDP and a functional PSIRT send a clear signal to the research community: "We take security seriously, and we will work with you." Without those signals, researchers have no incentive to cooperate with you.
Triage and Severity Assessment
Not every vulnerability report is equally urgent. Your PSIRT needs a repeatable process for evaluating incoming reports and assigning priority.
Use CVSS for severity scoring. The Common Vulnerability Scoring System (CVSS) is the industry-standard framework for rating vulnerability severity on a 0-to-10 scale. CVSS evaluates factors like attack vector, complexity, required privileges, and impact on confidentiality, integrity, and availability. Most vulnerability databases and security tools use CVSS, so adopting it ensures your severity assessments are consistent and comparable.
Define severity categories and response SLAs:
- Critical (CVSS 9.0-10.0): Remote code execution, unauthenticated access, full device compromise. Response target: patch within 7 days. Immediate PSIRT mobilization. If actively exploited, trigger ENISA reporting.
- High (CVSS 7.0-8.9): Significant impact but with mitigating factors (e.g., requires local access or specific conditions). Response target: patch within 30 days.
- Medium (CVSS 4.0-6.9): Limited impact or difficult to exploit. Response target: patch within 90 days.
- Low (CVSS 0.1-3.9): Minimal risk. Address in the next regular release cycle.
These timelines are guidelines, not rigid rules. Adjust them based on your product context. A medium-severity vulnerability in a life-safety device might warrant the same urgency as a high-severity issue in a consumer gadget. The important thing is to have defined, documented SLAs that your team consistently follows.
Filter out noise. Not every report is a valid vulnerability. Some will be duplicates, misconfigurations, or intended behavior. Your triage process should include a validation step where the technical analyst reproduces the issue before committing development resources.
The Incident Response Playbook
Your PSIRT needs a documented, step-by-step playbook that everyone on the team can follow. When a critical vulnerability arrives at 4 PM on a Friday, nobody should be guessing what to do next.
Here is a ten-step incident response process:
- Receive report. Vulnerability arrives through your intake channel. Assign a tracking identifier immediately.
- Acknowledge receipt. Send confirmation to the reporter within your defined SLA (e.g., 48 hours). Include the tracking ID.
- Validate and reproduce. Technical analyst attempts to reproduce the vulnerability in a controlled environment. Confirm whether the report is valid.
- Assess severity. Score using CVSS. Determine which products and firmware versions are affected. Identify the scope of user impact.
- Develop fix. Engineering builds a patch or mitigation. For hardware products, determine whether a firmware update, configuration change, or physical recall is required.
- Test fix. Validate that the patch resolves the vulnerability without introducing regressions. Test across all affected product variants.
- Release patch. Deploy the update through your normal distribution channels. For IoT devices, trigger over-the-air (OTA) updates where supported.
- Notify affected customers. Issue a security advisory with a clear description of the issue, affected products, and remediation steps. Use direct communication channels (email, in-app notification) for critical issues.
- Report to ENISA if applicable. If the vulnerability is actively exploited or constitutes a severe incident, submit the required notifications within the 24-hour, 72-hour, and 14-day windows.
- Conduct a post-mortem. After the issue is resolved, review what happened. What was the root cause? How quickly did you respond? What could be improved? Document lessons learned and update your processes accordingly.
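The ten steps form a linear pipeline, so even a very small team can track each case as a record that advances through named stages. The sketch below is a toy illustration; the stage names paraphrase the steps above, and all identifiers are invented rather than taken from any standard tooling.

```python
from dataclasses import dataclass, field

# Playbook stages, in order, paraphrasing the ten steps above.
STAGES = [
    "received", "acknowledged", "validated", "assessed",
    "fix_developed", "fix_tested", "patch_released",
    "customers_notified", "enisa_reported", "post_mortem",
]

@dataclass
class VulnCase:
    tracking_id: str
    stage: str = "received"
    history: list = field(default_factory=list)

    def advance(self) -> str:
        """Move the case to the next playbook stage, recording history."""
        i = STAGES.index(self.stage)
        if i + 1 < len(STAGES):
            self.history.append(self.stage)
            self.stage = STAGES[i + 1]
        return self.stage

case = VulnCase("PSIRT-2026-001")
case.advance()
print(case.stage)  # acknowledged
```

Even this much structure answers the Friday-afternoon question: every open case shows exactly which step it is on and which comes next.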
Print this playbook. Keep it accessible. Run tabletop exercises at least once per quarter to ensure your team can execute it under pressure.
Monitoring for Vulnerabilities Proactively
A reactive PSIRT — one that only responds when someone knocks on the door — is not sufficient under the CRA. You are expected to actively monitor the security status of your products throughout their support period.
Monitor CVE databases. Subscribe to feeds from the National Vulnerability Database (NVD) and other relevant sources. When a new CVE is published for a component you use, you should know about it within hours, not weeks.
Track your open-source dependencies. Most modern products are built on open-source components, and most CRA-relevant vulnerabilities will originate from your supply chain rather than from original research against your proprietary code. Subscribe to security advisories for the libraries and frameworks you depend on. GitHub Security Advisories, the OSV database, and vendor-specific mailing lists are key sources.
Use SBOM-based vulnerability scanning. This is where your Software Bill of Materials becomes operationally critical. By maintaining an accurate, up-to-date SBOM for each product, you can automatically cross-reference your component inventory against known vulnerability databases. When a new CVE drops for a library you use, an automated scan flags it immediately.
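At its core, SBOM-based scanning is a join between your component inventory and a vulnerability index. The toy sketch below makes that explicit; real scanners query live databases such as OSV or the NVD and handle version ranges, whereas here the index and component data are invented for illustration (CVE-2022-0778 is a real OpenSSL advisory, used as an example).

```python
def affected_components(sbom: list[dict], vuln_index: dict) -> list[tuple]:
    """Cross-reference SBOM components against a known-vulnerability index."""
    hits = []
    for comp in sbom:
        key = (comp["name"], comp["version"])
        for cve in vuln_index.get(key, []):
            hits.append((comp["name"], comp["version"], cve))
    return hits

# Hypothetical product SBOM and a tiny local vulnerability index
sbom = [
    {"name": "openssl", "version": "1.1.1k"},
    {"name": "zlib", "version": "1.3.1"},
]
vuln_index = {("openssl", "1.1.1k"): ["CVE-2022-0778"]}

print(affected_components(sbom, vuln_index))
# [('openssl', '1.1.1k', 'CVE-2022-0778')]
```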
Integrate monitoring into your development pipeline. Do not treat vulnerability monitoring as a separate, manual activity. Embed it into your CI/CD process so that every build is checked against the latest vulnerability data. This catches issues before they ship rather than after.
The reality is that the majority of vulnerabilities affecting your products will come from upstream components — an OpenSSL update, a Linux kernel patch, a flaw in a third-party SDK. Proactive monitoring is the only way to stay ahead of these.
Tools and Resources
You do not need to build everything from scratch. The security community has developed extensive frameworks and standards for vulnerability handling.
- FIRST.org PSIRT Services Framework: A comprehensive guide to establishing and operating a PSIRT, published by the Forum of Incident Response and Security Teams. It covers governance, intake, triage, remediation, and disclosure in detail.
- ISO/IEC 30111: The international standard for vulnerability handling processes. It defines the workflow from receipt through resolution and disclosure.
- ISO/IEC 29147: The international standard for vulnerability disclosure. It covers how to communicate with reporters and the public.
- ENISA single reporting platform: The designated channel for CRA vulnerability and incident notifications. Familiarize your team with the platform before a real incident forces you to learn it under pressure.
- CVSS Calculator (FIRST.org): Use this to consistently score vulnerability severity across your team.
- VINCE (CERT/CC): A free platform for coordinating multi-party vulnerability disclosure, useful when a vulnerability affects your product and others sharing the same component.
Seentrix provides automated vulnerability monitoring and SBOM scanning that feeds directly into your PSIRT process. By continuously scanning your product's component inventory against known vulnerability databases, Seentrix helps you identify affected products within minutes of a new CVE publication — giving your PSIRT the early warning it needs to meet CRA reporting timelines.
Starting Small
If this guide feels overwhelming, take a breath. You do not need everything on day one. The CRA requires a process, not perfection. What matters is that you have a functioning, documented capability that demonstrates you are handling vulnerabilities systematically.
Here is where to start:
- Publish a security.txt file. This takes 15 minutes and immediately signals to researchers that you accept vulnerability reports.
- Set up a security@ email address. Route it to one or two responsible people. Monitor it daily.
- Create a simple intake spreadsheet. Track report date, reporter, affected product, severity, status, and resolution date. You can move to dedicated tooling later.
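The intake log can literally start life as a CSV file with the columns listed above. A sketch using Python's standard csv module, with an invented sample row:

```python
import csv
import io

# Column names mirror the fields suggested above.
FIELDS = ["report_date", "reporter", "affected_product",
          "severity", "status", "resolution_date"]

buf = io.StringIO()  # stands in for a real file on disk
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow({
    "report_date": "2026-09-14",
    "reporter": "jane@example.org",
    "affected_product": "Thermostat X2 fw 3.1",
    "severity": "high",
    "status": "triaged",
    "resolution_date": "",
})

rows = list(csv.DictReader(io.StringIO(buf.getvalue())))
print(rows[0]["status"])  # triaged
```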
- Assign one person as responsible. Give them the authority and the time to own this process. Even a few hours per week is a start.
- Write a one-page vulnerability disclosure policy. Publish it on your website. Tell the world how to report issues to you.
Then grow from there. Add CVSS-based triage. Document your playbook. Integrate SBOM scanning. Set up ENISA reporting templates. Each step makes your process more robust, and each step brings you closer to full CRA compliance.
The manufacturers who will navigate the CRA successfully are not the ones with the biggest security budgets. They are the ones who start early, build incrementally, and treat product security as a core business function rather than an afterthought.
Start today. Your first PSIRT does not need to be perfect. It needs to exist.