# Delete Asset Source: https://docs.projectdiscovery.io/api-reference/assets/delete-asset delete /v1/assets/{asset_Id} Delete asset by ID # Get Asset Content Source: https://docs.projectdiscovery.io/api-reference/assets/get-asset-content get /v1/assets/{asset_id}/contents Get user asset content # Get Asset Metadata Source: https://docs.projectdiscovery.io/api-reference/assets/get-asset-metadata get /v1/assets/{asset_Id} Get asset metadata # Update Asset Content Source: https://docs.projectdiscovery.io/api-reference/assets/update-asset-content patch /v1/assets/{asset_id}/contents Update existing asset content # Upload Asset Source: https://docs.projectdiscovery.io/api-reference/assets/upload-asset post /v1/assets Manually upload user assets (uploaded to manual enumeration) # Add Config Source: https://docs.projectdiscovery.io/api-reference/configurations/add-config post /v1/scans/config Add a new scan configuration # Add excluded templates Source: https://docs.projectdiscovery.io/api-reference/configurations/add-excluded-templates post /v1/scans/config/exclude Add excluded templates # Delete Config Source: https://docs.projectdiscovery.io/api-reference/configurations/delete-config delete /v1/scans/config/{config_id} Delete scan configuration # Delete excluded template ids Source: https://docs.projectdiscovery.io/api-reference/configurations/delete-excluded-template-ids delete /v1/scans/config/exclude Delete excluded template ids # Get Config Source: https://docs.projectdiscovery.io/api-reference/configurations/get-config get /v1/scans/config/{config_id} Get a scan configuration # Get Configs List Source: https://docs.projectdiscovery.io/api-reference/configurations/get-configs-list get /v1/scans/config Get user scan configurations list # Get excluded templates Source: https://docs.projectdiscovery.io/api-reference/configurations/get-excluded-templates get /v1/scans/config/exclude Get excluded templates # Update Config Source: 
https://docs.projectdiscovery.io/api-reference/configurations/update-config patch /v1/scans/config/{config_id} Update existing scan configuration # Get elogs of given scan id Source: https://docs.projectdiscovery.io/api-reference/elog/get-elogs-of-given-scan-id get /v1/scans/{scan_id}/error_log # Create Enumeration Source: https://docs.projectdiscovery.io/api-reference/enumerations/create-enumeration post /v1/asset/enumerate Create a new enumeration # Delete Bulk Enumeration Source: https://docs.projectdiscovery.io/api-reference/enumerations/delete-assets-in-bulk delete /v1/asset/enumerate Delete enumeration by enumerate ids # Delete Enumeration Source: https://docs.projectdiscovery.io/api-reference/enumerations/delete-enumeration delete /v1/asset/enumerate/{enumerate_id} Delete enumeration by enumerate_id # Delete Enumeration Schedule Source: https://docs.projectdiscovery.io/api-reference/enumerations/delete-enumeration-schedule delete /v1/enumeration/schedule Delete a re-scan schedule # Export Enumeration Source: https://docs.projectdiscovery.io/api-reference/enumerations/export-enumeration get /v1/asset/enumerate/{enum_id}/export Export enumeration content # Export Enumeration of user Source: https://docs.projectdiscovery.io/api-reference/enumerations/export-enumeration-of-user get /v1/asset/enumerate/export Export enumeration content # Get All Enumeration Contents Source: https://docs.projectdiscovery.io/api-reference/enumerations/get-all-enumeration-contents get /v1/asset/enumerate/contents Get All enumeration content # Get all enumeration stats Source: https://docs.projectdiscovery.io/api-reference/enumerations/get-all-enumeration-stats get /v1/asset/enumerate/stats # Get Enumeration Source: https://docs.projectdiscovery.io/api-reference/enumerations/get-enumeration get /v1/asset/enumerate/{enumerate_id} Get enumeration by enumerate_id # Get enumeration config Source: https://docs.projectdiscovery.io/api-reference/enumerations/get-enumeration-config get 
/v1/asset/enumerate/{enumerate_id}/config # Get Enumeration Contents Source: https://docs.projectdiscovery.io/api-reference/enumerations/get-enumeration-contents get /v1/asset/enumerate/{enumerate_id}/contents Get enumeration content by enumerate_id # Get Enumeration List Source: https://docs.projectdiscovery.io/api-reference/enumerations/get-enumeration-list get /v1/asset/enumerate Get enumeration list # Get Enumeration Schedules Source: https://docs.projectdiscovery.io/api-reference/enumerations/get-enumeration-schedules get /v1/enumeration/schedule Get enumeration re-scan schedule # Get enumeration stats Source: https://docs.projectdiscovery.io/api-reference/enumerations/get-enumeration-stats get /v1/asset/enumerate/{enumerate_id}/stats # Group assets by filters Source: https://docs.projectdiscovery.io/api-reference/enumerations/group-assets-by-filters get /v1/asset/enumerate/filters # Group assets by filters for an enumeration Source: https://docs.projectdiscovery.io/api-reference/enumerations/group-assets-by-filters-for-an-enumeration get /v1/asset/enumerate/{enumerate_id}/filters # Rescan Enumeration Source: https://docs.projectdiscovery.io/api-reference/enumerations/rescan-enumeration post /v1/asset/enumerate/{enumerate_id}/rescan Re-run an existing enumeration # Set Enumeration Schedule Source: https://docs.projectdiscovery.io/api-reference/enumerations/set-enumeration-schedule post /v1/enumeration/schedule Set enumeration re-scan frequency # Stop Enumeration Source: https://docs.projectdiscovery.io/api-reference/enumerations/stop-enumeration post /v1/asset/enumerate/{enumerate_id}/stop Stop a running enumeration # Update Enumeration Source: https://docs.projectdiscovery.io/api-reference/enumerations/update-enumeration patch /v1/asset/enumerate/{enumerate_id} Update enumeration by enumerate_id # Get audit logs for team Source: https://docs.projectdiscovery.io/api-reference/get-audit-logs-for-team get /v1/team/audit_log # Cloud API Reference Introduction 
Source: https://docs.projectdiscovery.io/api-reference/introduction Details on the ProjectDiscovery API ## Overview The ProjectDiscovery API v1 is organized around [REST](http://en.wikipedia.org/wiki/Representational_State_Transfer). Our API has resource-oriented URLs, accepts and returns JSON in most cases, and uses standard HTTP response codes, authentication, and verbs. Our API also conforms to the [OpenAPI Specification](https://www.openapis.org/). This API documentation walks you through each of the available resources and provides code examples for `cURL`, `Python`, `JavaScript`, `PHP`, `Go` and `Java`. Each endpoint includes the required authorization information and parameters, and provides examples of the response you should expect. ## Authentication The ProjectDiscovery API uses API keys to authenticate requests. You can view and manage your API key in ProjectDiscovery at [https://cloud.projectdiscovery.io/](https://cloud.projectdiscovery.io/) under your user information. Authentication with the API is performed using a custom request header - `X-Api-Key` - which should be set to the API key from your ProjectDiscovery account. You must make all API calls over `HTTPS`. Calls made over plain HTTP will fail, as will requests without authentication or without all required parameters. ## Resources Below (and in the menu on the left) you can find the various resources available to the ProjectDiscovery API. Your assets (hosts, CIDR ranges, etc.) for scanning. Access public and private templates as well as AI template creation. Manage scans, scan schedules, and create new scans. See and manage vulnerabilities detected by PDCP. Retest vulnerabilities or run single template/target scans. See and manage user settings, API keys and more. 
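As a quick illustration of the authentication scheme described above, the sketch below builds an authenticated request with the `X-Api-Key` header and fetches the documented `GET /v1/user` profile endpoint. The `https://api.projectdiscovery.io` base URL and the `PDCP_API_KEY` environment variable are assumptions for this example; confirm the base URL against your account.

```python
import os
import urllib.request

# Assumed API base URL -- verify against your ProjectDiscovery account.
PDCP_BASE = "https://api.projectdiscovery.io"

def build_request(path: str, api_key: str) -> urllib.request.Request:
    """Build an HTTPS request authenticated via the X-Api-Key header."""
    return urllib.request.Request(
        PDCP_BASE + path,
        headers={"X-Api-Key": api_key, "Accept": "application/json"},
    )

api_key = os.environ.get("PDCP_API_KEY")
if api_key:
    # GET /v1/user returns the user profile and permissions (see Users endpoints).
    with urllib.request.urlopen(build_request("/v1/user", api_key)) as resp:
        print(resp.read().decode())
```

The same header works for every endpoint in this reference; only the path and HTTP verb change.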
# Get All Results Source: https://docs.projectdiscovery.io/api-reference/results/get-all-results get /v1/scans/results Get scan results of a user # Get Results Stats Source: https://docs.projectdiscovery.io/api-reference/results/get-results-stats get /v1/scans/results/stats Get user scan results stats # Get Scan Results Source: https://docs.projectdiscovery.io/api-reference/results/get-scan-results get /v1/scans/result/{scanId} Get results of a specific scan by ID # Get Scan Vulnerability Source: https://docs.projectdiscovery.io/api-reference/results/get-scan-vulnerability get /v1/scans/vuln/{vuln_id} Get scan result vulnerability by ID # Get Scans Result Filters Source: https://docs.projectdiscovery.io/api-reference/results/get-scans-result-filters get /v1/scans/results/filters Get user's scan-result filters # Get scan log of given scan id Source: https://docs.projectdiscovery.io/api-reference/scan_log/get-scan-log-of-given-scan-id get /v1/scans/{scan_id}/scan_log # Create Scan Source: https://docs.projectdiscovery.io/api-reference/scans/create-scan post /v1/scans Trigger a scan # Create vulns export to tracker Source: https://docs.projectdiscovery.io/api-reference/scans/create-vulns-export-to-tracker post /v1/scans/vulns/{vuln_id}/ticket Create vulns export to tracker # Delete Scan Source: https://docs.projectdiscovery.io/api-reference/scans/delete-scan delete /v1/scans/{scan_id} Delete a scan using scanId # Delete Bulk Scans Source: https://docs.projectdiscovery.io/api-reference/scans/delete-scan-in-bulk delete /v1/scans Delete multiple scans using scan ids # Delete Scan Schedule Source: https://docs.projectdiscovery.io/api-reference/scans/delete-scan-schedule delete /v1/scans/schedule Delete scan schedule for a user # Delete Scan Vulnerability Source: https://docs.projectdiscovery.io/api-reference/scans/delete-scan-vulnerability delete /v1/scans/vulns Batch delete scan vulnerabilities # Export Filtered Scan Source: 
https://docs.projectdiscovery.io/api-reference/scans/export-filtered-scan post /v1/scans/{scan_id}/export Export filtered scan results # Export Scan Source: https://docs.projectdiscovery.io/api-reference/scans/export-scan get /v1/scans/{scan_id}/export Export scan results # Export Scan Vulnerability Source: https://docs.projectdiscovery.io/api-reference/scans/export-scan-vulnerability get /v1/scans/vuln/{vuln_id}/export Export a specific scan vulnerability # Get All Scan Stats Source: https://docs.projectdiscovery.io/api-reference/scans/get-all-scan-stats get /v1/scans/stats Get all scans statistics for a user # Get All Scans History Source: https://docs.projectdiscovery.io/api-reference/scans/get-all-scans-history get /v1/scans/history Get user scan history details # Get Scan Source: https://docs.projectdiscovery.io/api-reference/scans/get-scan get /v1/scans/{scan_id} Get details of a scan by scan ID # Get Scan Config Source: https://docs.projectdiscovery.io/api-reference/scans/get-scan-config get /v1/scans/{scan_id}/config Get scan metadata config # Get Scan History Source: https://docs.projectdiscovery.io/api-reference/scans/get-scan-history get /v1/scans/{scanId}/history Get scan history detail by scanId # Get Scan IPs Source: https://docs.projectdiscovery.io/api-reference/scans/get-scan-ips get /v1/scans/scan_ips Get list of static IPs used for scan # Get Scan List Source: https://docs.projectdiscovery.io/api-reference/scans/get-scan-list get /v1/scans Get user scans status # Get Scan Schedules Source: https://docs.projectdiscovery.io/api-reference/scans/get-scan-schedules get /v1/scans/schedule Get scan schedules for a user # Get Scans Token Source: https://docs.projectdiscovery.io/api-reference/scans/get-scans-token get /v1/scans/token Get user scan token usage details # Import OSS Scan Source: https://docs.projectdiscovery.io/api-reference/scans/import-oss-scan post /v1/scans/import Import scan details # Rescan scan Source: 
https://docs.projectdiscovery.io/api-reference/scans/rescan-scan post /v1/scans/{scan_id}/rescan Re-run an existing scan # Retest vulnerability Source: https://docs.projectdiscovery.io/api-reference/scans/retest-vulnerability post /v1/scans/{vuln_id}/retest Retest a scan vulnerability # Set Scan Schedule Source: https://docs.projectdiscovery.io/api-reference/scans/set-scan-schedule post /v1/scans/schedule Set a scan schedule for a user # Stop Scan Source: https://docs.projectdiscovery.io/api-reference/scans/stop-scan post /v1/scans/{scan_id}/stop Stop a running scan; has no effect in any other state. # Update Imported Scan Source: https://docs.projectdiscovery.io/api-reference/scans/update-imported-scan patch /v1/scans/{scan_id}/import Import more results to a given scan # Update Scan Source: https://docs.projectdiscovery.io/api-reference/scans/update-scan patch /v1/scans/{scan_id} Update scan metadata # Update Scan Config Source: https://docs.projectdiscovery.io/api-reference/scans/update-scan-config patch /v1/scans/{scan_id}/config Update scan metadata config # Update Vulnerability Labels Source: https://docs.projectdiscovery.io/api-reference/scans/update-vulnerability-labels patch /v1/scans/vulns/labels Batch update vulnerability labels # Update Vulnerability Status Source: https://docs.projectdiscovery.io/api-reference/scans/update-vulnerability-status patch /v1/scans/vulns Batch update vulnerability status # Create Template Source: https://docs.projectdiscovery.io/api-reference/templates/create-template post /v1/template Create a private template # Delete Template Source: https://docs.projectdiscovery.io/api-reference/templates/delete-template delete /v1/template/{template_id} Delete private template using ID # Generate AI Template Source: https://docs.projectdiscovery.io/api-reference/templates/generate-ai-template post /v1/template/ai Generate a private template with AI Engine # Get Early Template Source: 
https://docs.projectdiscovery.io/api-reference/templates/get-early-template get /v1/template/early/{id} Get early template text # Get Early Template List Source: https://docs.projectdiscovery.io/api-reference/templates/get-early-template-list get /v1/template/early Get pdcp early template lists # Get Github Template Source: https://docs.projectdiscovery.io/api-reference/templates/get-github-template get /v1/template/github/{id} Get github template text # Get Github Template List Source: https://docs.projectdiscovery.io/api-reference/templates/get-github-template-list get /v1/template/github List of all user's github templates # Get Public Template Source: https://docs.projectdiscovery.io/api-reference/templates/get-public-template get /v1/template/public/* Get public template text using path # Get Public Template List Source: https://docs.projectdiscovery.io/api-reference/templates/get-public-template-list get /v1/template/public Get public-template list # Get Public Template Stats Source: https://docs.projectdiscovery.io/api-reference/templates/get-public-template-stats get /v1/template/stats Get public template statistics # Get Share Status Source: https://docs.projectdiscovery.io/api-reference/templates/get-share-status get /v1/template/share Get template shared status (shared-with-link) # Get Shared Template Source: https://docs.projectdiscovery.io/api-reference/templates/get-shared-template get /v1/template/share/{template_id} Get a shared template text # Get Template Source: https://docs.projectdiscovery.io/api-reference/templates/get-template get /v1/template/{template_id} Get private template text using ID # Get Template List Source: https://docs.projectdiscovery.io/api-reference/templates/get-template-list get /v1/template Get user private(my) templates # Share Template Source: https://docs.projectdiscovery.io/api-reference/templates/share-template post /v1/template/share Share a private template (shared-with-link) # Update Template Source: 
https://docs.projectdiscovery.io/api-reference/templates/update-template patch /v1/template Update existing private template # Update enumeration config Source: https://docs.projectdiscovery.io/api-reference/update-enumeration-config patch /v1/asset/enumerate/{enumerate_id}/config # Create API Key Source: https://docs.projectdiscovery.io/api-reference/users/create-api-key post /v1/user/apikey Create user api-key; this won't create a new api-key if one already exists. # Delete API Key Source: https://docs.projectdiscovery.io/api-reference/users/delete-api-key delete /v1/user/apikey Delete user api-key # Get API Key Source: https://docs.projectdiscovery.io/api-reference/users/get-api-key get /v1/user/apikey Get user api-key # Get User Profile Source: https://docs.projectdiscovery.io/api-reference/users/get-user-profile get /v1/user Get user profile and permissions # Rotate API Key Source: https://docs.projectdiscovery.io/api-reference/users/rotate-api-key post /v1/user/apikey/rotate Rotate user api-key # Settings & Administration Source: https://docs.projectdiscovery.io/cloud/admin Review administrative, team, and account settings ## Summary This guide covers general account administration under settings in our cloud platform. These administrative and system settings include details about your account, team settings for administrators, and password/2FA. If you have questions about settings that are not covered here, or functionality that you think would be helpful - [get in touch](/help). For details on other settings, check out the guides for those features. * [Scanning](/cloud/scanning/overview) * [Assets](/cloud/assets/overview) * [Templates](/cloud/editor/overview) ## Settings [Profile settings](https://cloud.projectdiscovery.io/settings) are available from the global navigation under your sign-in (top right) for access to your Profile, Team, Scan IPs and more. ## Profile Profile displays your username, email address, and the option to delete your account. 
*Note: The ability to update these profile components will be available in a future release.* ## Team Under **Settings → Team** all users can view team settings. Users with the appropriate permissions can also modify team settings and manage team members. View or update team names, manage team members, and delete teams (supported for team owners). * Use **Create Team** to create a new team (up to 2 for Pro Tier). * Modify team settings by selecting a team from the global navigation to display those team settings. ### User Types ProjectDiscovery supports four types of users with the following permissions: * Owner: Read, write, invite, billing * Admin: Read, write, invite * Member: Read, write * Viewer: Read ### Managing Teams Teams can be created by Pro and Custom tier users. A Pro subscription supports up to two teams with 10 members. For a larger number of teams or members, get in touch about a Custom tier configuration. ## Scan IPs Add Static IPs for greater control over your infrastructure scanning. ## Billing Purchase, view, or modify your subscription. A subscription to our Pro tier starts at \$250/month for scanning of up to 1000 unique assets. Additional upgrade options are also available with higher monthly asset limits - reach out to us with any questions about a custom contract. ## Security (Account Security) Use Security to update your password or to enable 2-factor authentication. * **Password** creates an account password that provides a login with your email (username) and password, as an alternative to using a linked account for login. These credentials will not replace any existing login configurations (for example, GitHub). * **Two-step authentication** provides additional authentication for your account with an authenticator application. 
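The static scan IPs surfaced on this settings page can also be retrieved programmatically via `GET /v1/scans/scan_ips`, which is listed under the Scans endpoints in the API reference. A minimal Python sketch, assuming the `api.projectdiscovery.io` base URL and an API key in the `PDCP_API_KEY` environment variable:

```python
import json
import os
import urllib.request

BASE = "https://api.projectdiscovery.io"  # assumed base URL

def scan_ips_request(api_key: str) -> urllib.request.Request:
    """Request the list of static IPs used for scans (GET /v1/scans/scan_ips)."""
    return urllib.request.Request(
        BASE + "/v1/scans/scan_ips",
        headers={"X-Api-Key": api_key, "Accept": "application/json"},
    )

api_key = os.environ.get("PDCP_API_KEY")
if api_key:
    with urllib.request.urlopen(scan_ips_request(api_key)) as resp:
        # The exact response schema is not documented in this guide; print it raw.
        print(json.dumps(json.load(resp), indent=2))
```

The returned addresses are the ones to whitelist in firewalls, WAFs, or IPS rules.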
# Audit Logs Source: https://docs.projectdiscovery.io/cloud/admin/audit-logs Track and monitor all user activities and system events across your organization Audit Logs are available exclusively for Enterprise customers. Contact our [sales team](https://projectdiscovery.io/request-demo) to learn more about Enterprise features. ProjectDiscovery's Audit Logs provide comprehensive visibility into all user activities and system events within your organization's ProjectDiscovery Cloud environment. The audit logging system captures detailed information about every significant action, including user logins, asset modifications, scan initiations, configuration changes, and API access events. Each log entry contains essential metadata such as the timestamp, user identity, IP address, action type, and affected resources, enabling security teams to maintain complete accountability and traceability. The audit logging interface presents events in a chronological timeline, with advanced filtering capabilities that allow you to search and analyze specific types of activities. Security administrators can filter logs based on multiple parameters including time ranges, user identities, action types, and affected resources. This granular filtering helps during security investigations, compliance audits, or when tracking specific changes across your organization's security workflows. From a security operations perspective, the audit logs serve as a crucial tool for detecting unusual patterns or potentially unauthorized activities. For instance, you can identify unusual scan patterns, track template modifications, or monitor API key usage across your organization. The system retains audit logs for an extended period, ensuring you have historical data available for compliance requirements or security investigations. Integration capabilities allow you to export audit logs to your existing security information and event management (SIEM) systems through our API. 
This enables you to incorporate ProjectDiscovery activity data into your broader security monitoring and alerting workflows. The audit log data can be particularly valuable during incident response scenarios, providing a clear timeline of events and actions leading up to or following a security event. For organizations with compliance requirements, our audit logs help demonstrate adherence to various security frameworks and regulations. The comprehensive logging of user actions, access patterns, and system changes provides the necessary documentation for security audits and compliance reviews. Each log entry is immutable and cryptographically signed, ensuring the integrity of your audit trail. # SAML SSO Source: https://docs.projectdiscovery.io/cloud/admin/saml-sso Enterprise Single Sign-On (SSO) integration for secure team access SAML SSO is available exclusively for Pro (as an add-on) and Enterprise customers. Contact our [sales team](https://projectdiscovery.io/request-demo) to enable SAML SSO for your organization. ProjectDiscovery supports Enterprise Single Sign-On (SSO) through SAML 2.0, enabling seamless and secure authentication using your organization's Identity Provider (IdP). Our SAML implementation is powered by Clerk, providing robust support for major identity providers including: * Microsoft Azure AD * Google Workspace * Okta Workforce * Custom SAML Providers ## Implementation Process SAML SSO setup requires manual configuration and verification by the ProjectDiscovery team to ensure secure implementation. Here's what to expect: 1. **Initial Setup Request** * After purchasing a Pro plan with SSO add-on or Enterprise contract * The ProjectDiscovery team will reach out to begin the configuration process * You'll be assigned a dedicated technical contact for the setup 2. 
**Configuration Steps** * Provide your IdP metadata and certificates * Configure allowed domains and user attributes * Set up SAML assertion mapping * Test the integration in a staging environment 3. **Verification & Go-Live** * Validate user provisioning and authentication * Confirm security settings and access controls * Enable the integration for production use ## Supported Features Our SAML integration includes comprehensive enterprise-grade features: * **Automated User Provisioning** * Just-in-Time (JIT) user creation * Attribute mapping for user profiles * Role and permission synchronization * **Security Controls** * Domain-based access restrictions * Enforced SSO for specified domains * Session management and timeout settings * **Advanced Options** * Support for IdP-initiated SSO * Multi-factor authentication integration * Custom attribute mapping ## Important Notes * SAML SSO setup requires manual configuration due to its security-critical nature * The setup process typically takes 1-2 business days * All configurations are thoroughly tested before production deployment * Changes to SAML settings may require ProjectDiscovery team assistance * Existing users can be migrated to SSO authentication seamlessly ## Getting Started To enable SAML SSO for your organization: 1. Ensure you have a Pro plan with SSO add-on or Enterprise contract 2. Contact your account representative or [sales team](https://projectdiscovery.io/request-demo) 3. Prepare your IdP configuration details 4. Schedule a setup call with our technical team Our team will guide you through the entire process, ensuring a secure and successful implementation of SAML SSO for your organization. # Scan IPs for Whitelisting Source: https://docs.projectdiscovery.io/cloud/admin/scan-ips Configure and manage scanning IP addresses for enterprise security controls Dedicated Scan IPs are available exclusively for Enterprise customers. 
Contact our [sales team](https://projectdiscovery.io/request-demo) to learn more about Enterprise features. ProjectDiscovery's Enterprise scanning infrastructure operates from a dedicated set of static IP addresses, enabling organizations to implement precise security controls and whitelisting policies. These fixed IP ranges are exclusively assigned to your organization's scanning activities, providing consistent and identifiable sources for all security assessments conducted through the platform. This dedicated IP infrastructure ensures that your security teams can easily distinguish ProjectDiscovery's legitimate scanning traffic from potential unauthorized scanning attempts. When configuring your security infrastructure to accommodate ProjectDiscovery scans, you can whitelist these specific IP addresses in your firewalls, Web Application Firewalls (WAFs), or Intrusion Prevention Systems (IPS). This whitelisting approach allows you to maintain strict security controls while ensuring uninterrupted vulnerability scanning operations. The platform provides both IPv4 and IPv6 addresses, supporting organizations with diverse network configurations and compliance requirements. Enterprise customers can customize scanning behavior on a per-IP basis, including the ability to set specific rate limits, configure custom headers, or assign particular IPs to different types of scans. This granular control helps organizations optimize their scanning operations while maintaining compliance with internal security policies. For instance, you might assign certain IPs for external asset discovery while reserving others for intensive vulnerability scanning, ensuring proper resource allocation and traffic management. The platform includes monitoring and analytics for scan traffic from these IPs, providing visibility into scanning patterns, bandwidth usage, and potential scanning issues. 
This monitoring helps security teams optimize their scanning strategies and troubleshoot any connectivity or performance problems. Additionally, if any of your security systems flag scanning activity from these IPs, you can quickly verify the legitimacy of the traffic against your assigned IP ranges. For organizations operating in regulated environments or with strict security requirements, our dedicated IP infrastructure provides the necessary isolation and control. Each scanning IP is documented and can be included in security compliance documentation, making it easier to demonstrate proper security controls during audits. The platform also supports custom DNS resolution and proxy configurations when needed for specialized scanning scenarios. # Adding Assets Source: https://docs.projectdiscovery.io/cloud/assets/adding-assets Learn how to add and manage assets in ProjectDiscovery ## Overview Assets in our cloud platform can be any hosts you want to monitor - URLs, IP addresses, or CIDR ranges. There are three primary methods to add assets: Automatically discover and monitor assets from your root domains Connect cloud providers to import and sync assets automatically Programmatically add and manage assets using our REST API ## Asset Discovery The fastest way to get started is through our asset discovery feature: 1. Navigate to **Assets → Add New Assets** 2. 
Enter root domains/CIDR/IPs based on your plan:

* Up to 10 root domains only, basic subdomain discovery, HTTP probing, basic technology detection, limited cloud asset discovery
* Up to 100 root domains, advanced subdomain enumeration, port scanning (top 1000 ports), deep technology fingerprinting, cloud integration, historical data tracking, custom discovery schedules, CIDR range scanning, IP block discovery, network perimeter mapping
* Custom limits, advanced asset enrichment, advanced cloud correlation, custom enrichment rules, dedicated discovery nodes, priority asset updates, ASN-based discovery, certificate chain analysis, subsidiary discovery, related domain correlation, company hierarchy mapping, acquisition tracking

Discovery features can be customized for Enterprise plans. Contact our [sales team](mailto:sales@projectdiscovery.io) for custom requirements. # Custom & Bulk Asset Labeling Source: https://docs.projectdiscovery.io/cloud/assets/custom-labeling Create and manage custom labels for your assets with powerful bulk labeling capabilities Custom Labels in ProjectDiscovery Cloud are user-defined tags that you can manually assign to any discovered asset. This feature works alongside the automatic, AI-driven labels that the platform generates. While the system's AI assigns labels for website types (e.g., API docs, internal apps, login pages, admin panels) and environments (e.g., production, staging, internal) by default, custom labels give you the flexibility to define your own categories and classifications for assets. In other words, you're not limited to the auto-generated labels – you can tag assets with labels that make sense for your organization's context (such as project names, owner teams, sensitivity, or any internal naming scheme). ### How They Work Using the ProjectDiscovery Cloud interface, a user can select an asset and assign one or more custom labels to it. 
These labels then appear alongside the asset in the inventory, just like the AI-generated labels. This manual labeling is valuable for capturing contextual information that automated methods might not know. For example, you might label certain assets as "Critical" if they pertain to core infrastructure, or tag a set of hosts as "Internal" if they should not be exposed to the internet. By labeling assets in a way that mirrors your environment and business, you ensure that important attributes of each asset are immediately visible. ### Benefits Custom labels allow security teams to organize assets according to custom criteria and quickly spot key asset properties at a glance. This user-driven categorization adds an extra layer of context – teams gain full control over how assets are categorized. It becomes easier to filter and group assets based on these tags (for example, viewing all assets labeled "Internal" or "Web-Server"). Ultimately, this leads to better asset management as the platform helps classify results to help you better organize, contextualize, and prioritize your assets. In practice, custom labels enable workflows like separating production vs. staging assets or flagging high-risk systems, so that teams can focus on relevant subsets of the attack surface during monitoring and scanning. ## Bulk Labeling ProjectDiscovery Cloud also supports Bulk Labeling, which lets users apply a label to many assets at once, rather than tagging each asset individually. This feature is implemented through the platform's powerful filtering system. Users can filter their asset list by specific criteria and then assign a label to all assets matching that filter in a few clicks. In effect, bulk labeling dramatically speeds up the process of categorizing large numbers of assets. 
### How It Works

The platform provides filtering across 14+ attributes of assets – you can narrow results by things like port number, technology, domain, IP, content length, and even by existing labels. Here's how to create and save bulk labels:

1. **Apply Filters**
   * Navigate to the Assets view
   * Click the "Filter" button in the top left
   * Select your desired filter criteria (e.g., port, technology, domain)
   * Apply multiple filters to refine your selection
2. **Select Assets**
   * After filtering, review the matching assets
3. **Apply Labels**
   * Click the "Label" button in the action bar
   * Enter your label name or select from existing labels
   * Click "Apply" to tag all selected assets
4. **Save as Dynamic Group** (Optional)
   * Click "Save Filter" in the top right
   * In the pop-up dialog, enter a name for your dynamic group
   * Click "Save" to create your dynamic group

Your saved dynamic group will automatically update as new assets matching your filter criteria are discovered. For example, you could label all assets running on port 8088 as 'staging' in just a few clicks. This filter-based bulk tagging means you don't have to manually edit each asset entry – the system streamlines it for you.

### Advantages

Bulk labeling is especially useful for applying environment or role labels to many assets simultaneously. It ensures consistency at scale – every asset meeting the criteria gets the exact same label, avoiding omissions or typos that might happen with one-by-one tagging. It's also a huge time-saver for large asset sets; teams can categorize hundreds or thousands of assets in seconds by leveraging filters, instead of minutes or hours. By making it easy to tag assets in bulk, ProjectDiscovery helps teams maintain an organized asset inventory even as new data pours in.
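The filter-then-label flow described above can be sketched in a few lines of Python. This is an illustrative model only – `Asset`, `matches`, and `bulk_label` are hypothetical names, not part of the ProjectDiscovery API:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    host: str
    port: int
    tech: str
    labels: set = field(default_factory=set)

def matches(asset, filters):
    # A filter is a dict of attribute -> required value (AND semantics),
    # mirroring how stacked UI filters refine the selection.
    return all(getattr(asset, attr) == value for attr, value in filters.items())

def bulk_label(inventory, filters, label):
    # Apply one label to every asset matching the filter, in a single pass.
    tagged = [a for a in inventory if matches(a, filters)]
    for a in tagged:
        a.labels.add(label)
    return tagged

inventory = [
    Asset("app.example.com", 443, "nginx"),
    Asset("stage1.example.com", 8088, "nginx"),
    Asset("stage2.example.com", 8088, "apache"),
]

# Mirror the example from the docs: label everything on port 8088 as 'staging'.
tagged = bulk_label(inventory, {"port": 8088}, "staging")
print(len(tagged))  # 2
```

Because the label is applied programmatically to every match, the "same label, no typos" consistency property follows directly from the single filter pass.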
## Use Cases and Workflow Integration

Both custom labels and bulk labeling open up new use cases for integrating ProjectDiscovery into security team workflows:

### Environment Segmentation

Teams can mark assets by environment (e.g., Development, Staging, Production) using custom labels. Bulk labeling makes it easy to apply these environment tags en masse. For example, filtering by port 8088 and tagging those assets as "staging" is a quick way to group all staging assets. This segmentation allows different handling of assets based on environment – for instance, running more frequent scans on production assets or applying stricter monitoring to internal-only systems.

### Technology or Port-based Grouping

If many assets share a common attribute (such as a specific open port, technology, or domain pattern), you can filter for them and label them in bulk. For instance, label all assets running an outdated software version as "Legacy" or all assets on port 22 as "SSH-Servers." This practice helps in quickly identifying groups of assets that might require a specific security assessment or patching regimen. The filtering system supports multi-select and complex queries (e.g., all assets on either Nginx or Apache) to refine these groups.

### Dynamic Asset Groups for Monitoring

After labeling assets, those labels can be used to create saved views or dynamic subgroups in the platform. A dynamic subgroup is essentially a saved filter that updates automatically as assets change. For example, once you've labeled certain assets as "Critical", you could save a filter for `label = Critical`. As new assets get tagged with "Critical" (either through AI suggestions or manual labeling), they will automatically appear in that group. This is highly useful for workflows like continuous monitoring or targeted vulnerability scanning – you always have an up-to-date list of assets in that category without rebuilding queries.
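Conceptually, a dynamic group is just a stored predicate that is re-evaluated against the inventory whenever it is viewed. The sketch below illustrates that behavior; `DynamicGroup` and its fields are hypothetical names for illustration, not platform APIs:

```python
class DynamicGroup:
    """A saved filter: stored once, re-run on demand."""
    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate  # the saved filter condition

    def members(self, inventory):
        # Re-evaluating the predicate each time means newly matching
        # assets appear in the group without rebuilding the query.
        return [a for a in inventory if self.predicate(a)]

critical = DynamicGroup("Critical", lambda a: "Critical" in a["labels"])

inventory = [{"host": "db1.example.com", "labels": {"Critical"}}]
print(len(critical.members(inventory)))  # 1

# A newly labeled asset shows up automatically on the next evaluation.
inventory.append({"host": "vpn.example.com", "labels": {"Critical", "Production"}})
print(len(critical.members(inventory)))  # 2
```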
### Prioritization and Triage

Custom labels can encode business context such as ownership (e.g., tagging an asset with the responsible team or project name) or criticality (e.g., High-Value, Low-Impact). Using bulk operations, a newly onboarded set of assets can quickly be labeled according to input from asset owners or CMDB data. Thereafter, security analysts can filter by these labels to prioritize issues. For example, during incident response or risk review, one might focus on assets labeled "Production" and "Customer-Facing" first, since an issue on those could be more severe.

# AI-Powered Asset Labeling

Source: https://docs.projectdiscovery.io/cloud/assets/labeling

Automatically categorize and contextualize your assets with AI-driven labeling

Asset labeling is currently in early beta and operates asynchronously. The initial labeling process may take some time as we optimize performance. We're actively working on speed improvements to make this process faster and more efficient.

**Asset labeling** is the automated process of categorizing and contextualizing the assets discovered by ProjectDiscovery. Instead of presenting you with a raw list of domains or IPs, the platform intelligently **classifies assets** by attaching descriptive labels or tags to each one. These labels provide immediate context about what an asset is – for example, distinguishing a marketing website from an API endpoint or identifying a development server versus a production system. By automatically organizing assets into meaningful categories, asset labeling helps security teams understand their attack surface at a glance and focus on what matters most. In practical terms, once ProjectDiscovery discovers an asset, it will evaluate that asset's characteristics and assign labels that describe its role or nature.
For instance, a web application login page might be labeled as a "Login Portal," or a host with a name like *staging.example.com* might get tagged as "Staging Environment" to indicate it's not a production system. Asset labeling bridges the gap between raw asset data and the business context behind those assets, making your asset inventory more informative and easier to navigate.

## How It Works

ProjectDiscovery's asset labeling engine classifies assets by analyzing various pieces of information collected during discovery. It uses a combination of asset metadata, DNS information, HTTP responses, and even screenshots to determine how to label each asset:

* **Asset Metadata:** Basic details about the asset (such as IP addresses, open ports, SSL certificate data, and hosting information) are examined for clues. For example, an SSL certificate's Common Name might reveal the application's name, or an IP's ASN could indicate the cloud provider or organization owning the asset. This metadata helps identify what the asset might be (e.g., a cloud storage bucket, a VPN gateway, etc.) and adds context for labeling.
* **DNS Records:** DNS information is used to infer the asset's purpose or ownership. The domain or subdomain names can be very telling. For instance, an asset under `dev.` or `staging.` subdomains suggests a non-production environment, whereas something like `mail.example.com` could indicate an email server. CNAME records might point to a known service (for example, a CNAME to a SaaS provider's domain), which the platform can recognize and label accordingly. In short, ProjectDiscovery looks at hostnames and DNS details to glean context (like environment, service type, or associated product) that informs the asset's label.
* **HTTP Responses:** For web assets, the content and behavior of the HTTP(S) service are analyzed. The platform uses its HTTP probing capabilities to gather response headers, status codes, and page content.
This includes looking at the HTML title, body text, and other fingerprints. Certain keywords or patterns can identify the application type – for example, a page title containing "Login" or a form with password fields likely indicates a login portal, while a default page saying "Welcome to nginx" indicates a generic web server instance. The system also detects technologies and frameworks running on the asset (e.g., identifying a WordPress site or an Apache server from response signatures) via deep technology fingerprinting. All this HTTP-derived information feeds into the labeling decision.

* **Screenshots:** ProjectDiscovery can capture screenshots of discovered web services. These screenshots provide a visual snapshot of the asset's interface. In the asset labeling process, screenshots serve as an additional data point for understanding the asset. For example, a screenshot that shows a login screen or an admin panel UI is a strong indicator of the asset's function (even if the text wasn't conclusive). While the labeling at this beta stage is mostly driven by metadata and textual analysis, having a screenshot means that if automated logic doesn't perfectly categorize an asset, an analyst can quickly glance at the image and understand what the asset is.

Behind the scenes, all these inputs are combined to assign one or multiple labels to the asset. The system uses a rules-based approach (and will continue to get smarter over time) to match patterns or signatures with label categories. For example, if an asset's DNS name contains "api" and the HTTP response returns JSON, a rule might label it as an "API Endpoint." Similarly, a host identified to be running Jenkins (via tech fingerprinting of HTTP response) might get a label like "Jenkins CI" to denote it's a CI/CD service. Each label is essentially a quick descriptor that summarizes an aspect of the asset, allowing you to immediately understand its nature without deep manual investigation.
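A rules-based labeler of this kind can be sketched as a set of pattern checks over the collected signals. The rules and label names below are illustrative examples taken from this page, not the platform's actual rule set:

```python
def label_asset(asset):
    """Match simple patterns over discovery signals to produce labels."""
    labels = set()
    hostname = asset.get("hostname", "")
    content_type = asset.get("content_type", "")
    title = asset.get("title", "")
    tech = asset.get("tech", [])

    # DNS name contains "api" and the service speaks JSON -> API Endpoint.
    if "api" in hostname and "application/json" in content_type:
        labels.add("API Endpoint")
    # dev./staging. prefixes suggest a non-production environment.
    if hostname.startswith(("dev.", "staging.")):
        labels.add("Non-Production")
    # A "Login" page title is a strong login-portal signal.
    if "login" in title.lower():
        labels.add("Login Portal")
    # Tech fingerprinting can name the product directly.
    if "Jenkins" in tech:
        labels.add("Jenkins CI")
    return labels

asset = {
    "hostname": "api.staging.example.com",
    "content_type": "application/json",
    "title": "API Docs",
    "tech": [],
}
print(sorted(label_asset(asset)))  # ['API Endpoint']
```

Real labeling combines many more signals (certificates, ASN data, screenshots), but each rule reduces to the same shape: signal pattern in, label out.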
## Benefits of Automated Labeling

Automated asset labeling brings several advantages to security professionals and engineers managing a large number of assets:

* **Reduces Manual Effort:** One of the biggest benefits is cutting down the tedious work of labeling assets by hand. In the past, teams might maintain spreadsheets or use tagging systems to mark which assets are production, which are internal, which belong to a certain team, etc. ProjectDiscovery's automated approach does this heavy lifting for you. As soon as assets are discovered, the platform annotates them with relevant labels, sparing you from examining each asset individually and typing out tags. This automation frees up your time to focus on higher-value tasks like analyzing findings or improving security controls.
* **Speeds Up Security Triage:** With assets automatically categorized, you can prioritize and triage security issues faster. When a new vulnerability or incident is reported, having labeled assets means you instantly know the context. For example, if an alert comes in for *api.test.example.com*, an "API" label and perhaps a "Staging" label on that asset will tell you it's a staging API server. You can then decide the urgency (maybe lower than a production issue) and the appropriate team to notify. Without having to dig for this information, response times improve. In short, labels act as immediate context clues that help you quickly determine the criticality of an asset and the impact of any associated vulnerabilities.
* **Better Asset Management & Organization:** Asset labels make it much easier to organize and filter your asset inventory. You can group assets by their labels to get different views of your attack surface. For instance, you might filter to see all assets labeled "Production" to ensure you're focusing scans and monitoring on live customer-facing systems, or you might pull up all assets labeled "Login Portal" to review authentication points in your infrastructure.
This capability turns a flat list of assets into a richly organized dataset that can be sliced and diced for various purposes. It enhances visibility across your environment – you can quickly answer questions like "How many external login pages do we have?" or "Which assets are running database services?" if such labels are applied. Ultimately, this leads to more structured and efficient asset management.

* **Consistency and Scale:** Automated labeling applies the same criteria uniformly across all assets, ensuring consistent classification. Human tagging can be subjective – different team members might label similar assets differently or overlook some assets entirely. With ProjectDiscovery doing it automatically, every asset is evaluated with the same logic, and nothing gets skipped due to oversight. This consistency is especially important when you have hundreds or thousands of assets in dynamic cloud environments. The feature scales effortlessly – no matter how many assets you discover overnight, each will get labeled without adding to anyone's workload. As your attack surface grows, automated labeling keeps the context up-to-date continuously, which is crucial for maintaining an accurate asset inventory in fast-changing environments.

In summary, automated asset labeling streamlines asset management by eliminating manual tagging drudgery, accelerating the interpretation of asset data, and bringing order and clarity to your inventory. It's an efficiency boost that also improves the quality of your security posture by ensuring you always know what each asset is and why it's there.

# Asset Discovery and Exposure Management

Source: https://docs.projectdiscovery.io/cloud/assets/overview

Next-generation attack surface management and asset discovery platform

Attack Surface Management (ASM) has evolved from basic asset enumeration into a sophisticated process that continuously discovers, classifies, and monitors all assets vulnerable to attack.
Modern organizations face ever‑expanding digital footprints spanning traditional internet-facing systems, dynamic cloud environments, and complex distributed services. ProjectDiscovery redefines ASM by combining proven open‑source techniques with advanced cloud‑native capabilities. This unified platform delivers instant insights—through a search‑like experience and deep reconnaissance—ensuring comprehensive coverage and real‑time visibility into your entire infrastructure. In essence, it lets your security team see your organization's attack surface as an attacker would, leaving no blind spots.

This document outlines the core workflows and architectural components of ProjectDiscovery's ASM and Exposure Management. It is designed to help new users quickly understand how the system works and to provide a structured, yet developer‑friendly, overview for security and engineering teams.

***

## Platform Architecture

Our next‑generation asset discovery platform is built on a three‑layer architecture developed through extensive collaboration with hundreds of security teams. Each layer plays a distinct role in mapping and monitoring your infrastructure.

### 1. External Discovery Layer

* **Instant Enumeration:** Leveraging our enhanced Chaos database, this layer delivers immediate results through pre‑indexed data for hundreds of thousands of domains.
* **Deep Reconnaissance:** Active reconnaissance methods (advanced DNS brute‑forcing, permutation analysis, certificate transparency log monitoring) supplement instant results.
* **ASN Mapping:** Sophisticated ASN correlation (ASNMap) uncovers hidden relationships by mapping IP ranges associated with your organization. This network‑level insight expands your visibility beyond known domains.
* **Third‑Party Data & Subsidiary Discovery:** Integration with external sources (e.g., Shodan, Censys, FOFA) and subsidiary detection mechanisms automatically identify related brands and assets—ensuring that acquired or lesser‑known entities are not overlooked.

### 2. Cloud Integration Layer

* **Real‑Time Cloud Asset Discovery:** Our enhanced Cloudlist engine connects natively with AWS, Azure, GCP, and more, continuously monitoring your cloud footprint.
* **Service & Configuration Monitoring:** Advanced heuristics identify exposed services and risky configurations in real time, while persistent API connections ensure your cloud inventory stays up‑to‑date.
* **Cross‑Cloud Correlation:** Cloud‑based assets are linked with ASN data and external discoveries to provide a unified view of your overall attack surface.

### 3. Asset Management Layer

* **Enrichment & Classification:** Raw asset data is transformed through multi‑stage analysis. Comprehensive DNS analysis, HTTP probing (with screenshots and technology fingerprinting), and certificate evaluation work together to create detailed asset profiles.
* **Automated Labeling:** AI‑powered models automatically categorize and tag assets based on their characteristics, behavior patterns, and risk profiles. Users can also define custom labels and apply bulk labeling to further organize assets by environment, ownership, or risk.
* **Graph‑Based Relationship Mapping:** Advanced mapping visualizes complex asset relationships and attack paths, providing actionable intelligence for prioritizing security efforts.
***

## Key Workflows & Features

* Automatically discover and track all external-facing and internal assets using integrated tools like Subfinder, Naabu, Httpx, and more
* Organize assets with AI-generated and custom labels for efficient management and prioritization
* Capture visual snapshots of web assets for quick identification of exposed interfaces
* Automatically map and manage assets across multiple subsidiaries and brands
* Native integration with major cloud providers for comprehensive asset discovery
* Seamless integration with Nuclei-powered scanning for comprehensive security assessment

***

## Best Practices & Next Steps

* **Enable Continuous Scanning:** Schedule regular asset discovery and vulnerability scans to ensure your inventory remains current.
* **Leverage Labels Effectively:** Develop a consistent labeling scheme that reflects your organizational structure (e.g. by environment, department, or risk level) to prioritize remediation efforts.
* **Integrate with Your Workflow:** Set up integrations with alerting systems (Slack, Teams, email) and ticketing tools (Jira, GitHub) to automate notifications and track remediation.
* **Review & Update Regularly:** Periodically audit your asset inventory to remove stale entries and adjust labels as your infrastructure evolves.
* **Explore Advanced Features:** Once you're comfortable with the basics, dive into additional features such as customized filtering, dynamic grouping, and deeper cloud integrations to further refine your exposure management.

***

By following this guide, new users can quickly grasp the full capabilities of ProjectDiscovery's ASM and Exposure Management. The integrated workflows—from asset discovery and enrichment to continuous monitoring and vulnerability assessment—provide a robust, real‑time view of your infrastructure, empowering your security team to proactively secure your attack surface. Enjoy the streamlined, automated approach to managing your organization's exposure with ProjectDiscovery!
# Asset Screenshots

Source: https://docs.projectdiscovery.io/cloud/assets/screenshots

Visual catalog of your discovered assets for quick security assessment

The Screenshots feature is currently in beta and operates asynchronously. After asset discovery, there may be a delay before screenshots become available as they are processed in the background. This limitation is temporary while we work on infrastructure optimizations to make screenshot generation instant. We are actively working on:

* Reducing screenshot generation time
* Implementing real-time processing
* Scaling our infrastructure to handle concurrent screenshot requests
* Making the feature more widely available to all users

During the beta period, you may experience longer wait times for screenshots to appear in your dashboard. We appreciate your patience as we enhance this feature to provide instant visual insights for all users.

The *Screenshots* feature automatically captures and catalogs visual snapshots of web assets identified during your discovery process. In practice, this means that for each discovered web service, an image of its web page is saved for you to review. These screenshots provide a quick visual summary of what was found, allowing you to identify interesting or anomalous web pages at a glance. All captured images are organized alongside asset data, so security teams can easily browse them without manually visiting each site.

**How this helps:** By seeing the actual rendered pages, you can spot login portals, dashboards, error pages, or other telling visuals immediately. This added context enriches your asset inventory beyond raw URLs and metadata, giving you an at-a-glance understanding of each asset's interface and content.

## How It Works (Technical Process)

Under the hood, the screenshot feature uses a headless browser to load each web page and take a snapshot of it.
When asset discovery with screenshots is initiated, the system will launch a browser engine (Chrome in headless mode) to fully render the target page (including HTML, CSS, and JavaScript) before capturing the image. Because of this rendering step, screenshot generation is **resource-intensive** and **time-consuming**. Each page needs to load as if you opened it in a real browser, which introduces processing delays.

In the current beta implementation, screenshots are taken **asynchronously**. This means the initial asset discovery can complete and return results before all screenshots are finished. The images will continue to be captured in the background and will appear in your asset catalog once ready. As a result, you might notice a gap between discovering an asset and seeing its screenshot. This is normal in the beta – the feature prioritizes completing the discovery process first, then works on rendering pages for snapshots.

## Why Use Screenshots?

Traditionally, after discovering new web assets, security engineers would **manually inspect** each site to understand what it is. This might involve copying URLs into a browser or using separate tools to capture site images. For large numbers of assets, that manual approach is tedious and time‑consuming. Important details could be missed if an analyst doesn't have time to check every single site.

The screenshots feature automates this **visual assessment** step. Instead of manually visiting dozens or hundreds of websites, the system automatically provides you with a gallery of each site's front page. This saves considerable time and effort – without automation, teams often had to write custom scripts (for example, using Selenium to take browser snapshots) or even rerun their discovery with a separate screenshot tool just to capture images. Now, that process is integrated: as soon as an asset is found, a screenshot is queued up for it.
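The discover-first, render-later pattern described above can be sketched with a background worker and a queue. This is a minimal illustration of the asynchronous shape only; the function names are hypothetical, and the stubs stand in for real discovery tooling and a headless-browser render:

```python
import queue
import threading

def discover_assets(seeds):
    # Stand-in for real discovery (Subfinder, HTTPX, etc.).
    return [f"https://{s}" for s in seeds]

def capture_screenshot(url):
    # Stand-in for a headless-browser render; the real step is slow and
    # resource-intensive, which is why it runs off the discovery path.
    return f"{url}.png"

def discover_with_screenshots(seeds):
    assets = discover_assets(seeds)   # discovery completes immediately
    shots = {}                        # filled in later, asynchronously
    pending = queue.Queue()
    for url in assets:
        pending.put(url)

    def worker():
        while True:
            try:
                url = pending.get_nowait()
            except queue.Empty:
                return
            shots[url] = capture_screenshot(url)

    bg = threading.Thread(target=worker)
    bg.start()
    # The caller gets the asset list before any screenshot is ready.
    return assets, shots, bg

assets, shots, bg = discover_with_screenshots(["example.com", "app.example.com"])
bg.join()  # once the background worker finishes, the screenshots appear
print(shots["https://example.com"])
```

The gap users see between discovery and screenshots corresponds to the window between `return` and the worker draining the queue.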
Security teams can quickly scroll through the captured images to triage assets, prioritize investigation, and spot anything visually unusual or interesting. In essence, **Screenshots turn a once-manual, one-by-one review into an automated, at-scale process**, letting you cover more ground faster.

**Use case example:** If your discovery process finds an unknown subdomain hosting a login page, the screenshot will show you the login form and branding. This immediate context might tell you that the site is an admin portal, which is valuable information for risk assessment. Without the screenshot, you might have overlooked that subdomain or delayed investigating it until you could manually check it. By automating this, the feature ensures no discovered web asset goes visually unchecked.

# Subsidiary & Multi-Organization Management

Source: https://docs.projectdiscovery.io/cloud/assets/subsidiary

Discover and manage assets across multiple organizations, subsidiaries, and brands

Need advanced workflows or custom subsidiary management? Our team can help set up enterprise-grade configurations tailored to your infrastructure. [Talk to our team](https://projectdiscovery.io/request-demo) to discuss your specific requirements.

Modern enterprises frequently have complex infrastructures spread across many domains and business units. ProjectDiscovery's platform is designed to give security teams **instant visibility into the entire organizational attack surface**, including assets belonging to subsidiaries, acquired companies, and separate brands. It does so by automating asset discovery and correlation on a global scale. The platform acts as a centralized inventory where all web properties, cloud resources, and external-facing systems tied to an organization are cataloged together, regardless of which subsidiary or team they belong to.
ProjectDiscovery built its cloud platform with **end-to-end exposure management workflows** that continuously discover assets and monitor them in real time. This means as your organization grows – launching new websites, spinning up cloud services, or acquiring companies – the platform automatically updates your asset inventory and keeps track of new potential entry points. In short, ProjectDiscovery provides a *"single pane of glass"* for enterprise security teams to oversee multi-organization infrastructures.

## Challenges in Traditional Subsidiary Asset Discovery

Tracking assets across multiple organizations or subsidiaries is notoriously difficult when done manually. Security teams traditionally had to compile lists of subsidiary domains and networks from internal knowledge or public records, then run separate scans for each – a time-consuming and error-prone process. Some common challenges include:

* **Incomplete Visibility:** Large organizations might have dozens of subsidiaries or brand domains, and each may host numerous applications. Manually mapping all these entities is a huge challenge. In practice, many enterprises have "hundreds or even thousands of related entities," making it *"difficult to get a clear picture of their full attack surface"*. Important assets can be overlooked simply because they were not on the main corporate domain.
* **Constant Change:** Mergers, acquisitions, and divestitures mean the set of assets is constantly evolving. Without continuous updates, asset inventories become outdated quickly. IP addresses and domains can change ownership or get spun up and down rapidly in cloud environments. Keeping track of these changes manually is untenable.
* **Fragmented Data Sources:** Information about subsidiaries is often scattered (e.g. in financial databases, press releases, WHOIS records). As a result, mapping out which domains or systems are owned by your company (versus third parties) can require extensive research.
This fragmentation leads to **blind spots** in security monitoring.

* **Risk of Unknown Assets:** Perhaps the biggest risk is that **unknown or unmanaged assets can lead to security incidents**. If a security team is only monitoring the primary organization's domains, a forgotten website under a subsidiary could become an easy target. As one security engineer described, without a centralized view "*new assets could pop up without our knowledge, creating potential vulnerabilities like subdomain takeovers*". In other words, attackers might exploit an obscure subsidiary's forgotten cloud bucket or an old acquisition's server if the defenders aren't even aware it exists.

These challenges mean that traditional approaches (spreadsheets of subsidiaries, manual scans, etc.) often fail to provide complete coverage. Security teams end up reactive – finding out about a subsidiary's exposure only after an incident or external report. Clearly, a more automated, scalable solution is needed for subsidiary and multi-organization asset management.

## How ProjectDiscovery Solves This Problem

ProjectDiscovery's platform introduces automated features that **eliminate the manual legwork** of subsidiary asset discovery. It leverages external data and intelligent correlation to map out an enterprise's entire digital footprint across all related organizations, with minimal user input. Key capabilities include:

* **Automated Subsidiary Correlation:** ProjectDiscovery integrates with the Crunchbase API to automatically identify which companies and domains are associated with your organization. As soon as you onboard, the platform pulls in known subsidiaries and related entities from Crunchbase's extensive corporate database. This means security teams *immediately* see a list of subsidiaries and their known domains without having to manually research corporate filings or news articles.
By using this external intelligence, ProjectDiscovery can **map subsidiaries to assets** and help track associated assets across your entire corporate structure.

* **Seamless Onboarding of Subsidiary Assets:** The platform presents this extended view during onboarding – giving users an instant snapshot of their organization's broad footprint as they set up their account. Instead of starting with a blank slate, an enterprise user logging into ProjectDiscovery for the first time might immediately see that the platform has identified, for example, *"SubsidiaryX.com, SubsidiaryY.net, and BrandZ.com"* as belonging to their company. This **jump-starts the asset inventory** by automatically including the web properties of all child organizations. Such visibility, right at onboarding, ensures no major branch of the business is initially overlooked.
* **Recognition of Brands and Owned Domains:** Subsidiary discovery in ProjectDiscovery isn't limited to exact company names – it also helps surface related domains or brands. For example, if your organization owns multiple product brands each with their own website, the platform can recognize those as part of your attack surface. It correlates various clues (DNS records, SSL certificates, WHOIS info, etc.) to cluster assets by ownership. As a result, security teams get a unified view of everything "owned" by the broader organization, even if operated under different names.
* **Continuous Enrichment and Updates:** ProjectDiscovery's asset correlation is not a one-time static pull; it is continuously being enhanced. Upcoming improvements will use **reverse WHOIS lookups** to find additional owned domains and associated entities that might not be obvious from corporate listings. This will further expand coverage by catching assets that share registration details or contact emails with the organization.
The platform is also opening up these discovery capabilities via API for the community, so its subsidiary detection engine will keep getting smarter over time. For the security team, this means the asset inventory grows and updates automatically as new information surfaces – without manual effort.

By automating subsidiary and multi-organization asset discovery, ProjectDiscovery **saves countless hours** of manual mapping and drastically reduces the chances of missing a part of your attack surface. Security teams no longer need to maintain separate inventories or perform ad-hoc research whenever the company expands; the platform handles it for them in the background. All assets across the parent company and its subsidiaries funnel into one consolidated inventory for monitoring.

# AI Assistance

Source: https://docs.projectdiscovery.io/cloud/editor/ai

Review details on using AI to help generate templates for Nuclei and ProjectDiscovery AI Prompt

[The Template Editor](https://cloud.projectdiscovery.io/) includes AI assistance to generate templates for vulnerability reports. This document guides you through the process, offering usage tips and examples.

## Overview

Powered by ProjectDiscovery's deep library of public Nuclei templates and a rich CVE data set, the AI understands a broad array of security vulnerabilities. First, the system interprets the user's prompt to identify a specific vulnerability. Then, it generates a template based on the steps required to reproduce the vulnerability, along with all the necessary meta information for reproduction and remediation.

## Initial Setup

Kick-start your AI Assistance experience with these steps:

1. **Provide Detailed Information**: Construct comprehensive Proof of Concepts (PoCs) for vulnerabilities like Cross-Site Scripting (XSS), and others.
2. **Understand the Template Format**: Get to grips with the format to appropriately handle and modify the generated template.
3. **Validation and Linting**: Use the integrated linter to guarantee the template's validity.
4. **Test the Template**: Evaluate the template against a test target to confirm its accuracy.

## Best Practices

* **Precision Matters**: Detailed prompts yield superior templates.
* **Review and Validate**: Consistently check matchers' accuracy.
* **Template Verification**: Validate the template on known vulnerable targets before deployment.

## Example Prompts

The following examples demonstrate different vulnerabilities and the corresponding prompts.

Open redirect vulnerability identified in a web application. Here's the PoC:

HTTP Request:

```
GET /redirect?url=http://malicious.com HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0
```

HTTP Response:

```
HTTP/1.1 302 Found
Location: http://malicious.com
Content-Length: 0
Server: Apache
```

The application redirects the user to the URL specified in the `url` parameter, leading to an open redirect vulnerability.

SQL Injection vulnerability in a login form. Here's the PoC:

HTTP Request:

```
POST /login HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0
Content-Type: application/x-www-form-urlencoded

username=admin&password=' OR '1'='1
```

HTTP Response:

```
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1337
Server: Apache

...
Welcome back, admin
...
```

The application improperly handles user input in the password field, leading to an SQL Injection vulnerability.
Business Logic vulnerability in a web application's shopping cart function allows negative quantities, leading to a credit. Here's the PoC:

HTTP Request:

```
POST /add-to-cart HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0
Content-Type: application/x-www-form-urlencoded

product_id=1001&quantity=-1
```

HTTP Response:

```
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1337
Server: Apache

...
Product added to cart. Current balance: -$19.99
...
```

The application fails to validate the quantity parameter, resulting in a Business Logic vulnerability.
Server-side Template Injection (SSTI) vulnerability through a web application's custom greeting card function. Here's the PoC:

HTTP Request:

```
POST /create-card HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0
Content-Type: application/x-www-form-urlencoded

message={{7*7}}
```

HTTP Response:

```
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1337
Server: Apache

...
Your card: 49
...
```

The application processes the message parameter as a template, leading to an SSTI vulnerability.
Insecure Direct Object Reference (IDOR) vulnerability discovered in a website's user profile page. Here's the PoC:

HTTP Request:

```
GET /profile?id=2 HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0
Cookie: session=abcd1234
```

HTTP Response:

```
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1337
Server: Apache

...
Welcome, otheruser
...
```

The application exposes sensitive information of a user (ID: 2) who is not the authenticated user (session: abcd1234), leading to an IDOR vulnerability.
Path Traversal vulnerability identified in a web application's file download function. Here's the PoC:

HTTP Request:

```
GET /download?file=../../etc/passwd HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0
```

HTTP Response:

```
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 1827
Server: Apache

root:x:0:0:root:/root:/bin/bash
```

The application fetches the file specified in the file parameter from the server file system, leading to a Path Traversal vulnerability.

Business logic vulnerability in a web application's VIP subscription function allows users to extend the trial period indefinitely. Here's the PoC:

HTTP Request:

```
POST /extend-trial HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0
Cookie: session=abcd1234
```

HTTP Response:

```
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 1337
Server: Apache

Your VIP trial period has been extended by 7 days.
```

The application does not limit the number of times the trial period can be extended, leading to a business logic vulnerability.
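For reference, the open redirect prompt shown earlier might yield a template along these lines. This is an illustrative sketch only, not actual AI output; the `id`, metadata, and matcher values are assumptions chosen to mirror the PoC:

```yaml
# Sketch of a possible generated template for the open redirect PoC above.
# All identifiers and metadata are illustrative placeholders.
id: open-redirect-url-param

info:
  name: Open Redirect via url Parameter
  author: example
  severity: medium
  description: The application redirects to the URL supplied in the url parameter.

http:
  - method: GET
    path:
      - "{{BaseURL}}/redirect?url=http://malicious.com"
    # Both conditions must hold: a 302 status and the attacker-controlled
    # Location header echoed back in the response.
    matchers-condition: and
    matchers:
      - type: status
        status:
          - 302
      - type: word
        part: header
        words:
          - "Location: http://malicious.com"
```

A real generated template would also need review against false positives, for example, redirects that only allow same-origin targets.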
Each of these examples provides HTTP Requests and Responses to illustrate the vulnerabilities.

## Limitations

Please note that the current AI is trained primarily on HTTP data. Template generation for non-HTTP protocols is not supported at this time. Support for additional protocols is under development and will be available soon.

# Templates & Editor FAQ

Source: https://docs.projectdiscovery.io/cloud/editor/faq

Answers to common questions about Nuclei templates and our cloud platform template editor

Nuclei [templates](http://github.com/projectdiscovery/nuclei-templates) are the core of the Nuclei project and ProjectDiscovery Cloud Platform. The templates contain the actual logic that is executed in order to detect various vulnerabilities. The ProjectDiscovery template library contains **several thousand** ready-to-use **[community-contributed](https://github.com/projectdiscovery/nuclei-templates/graphs/contributors)** vulnerability templates. We are continuously working with our open source community to update and add templates as vulnerabilities are discovered.

We maintain a [template guide](/templates/introduction/) for writing new and custom Nuclei templates. ProjectDiscovery Cloud Platform also provides AI support to assist in writing and testing custom templates.

* Check out our documentation on the [Templates Editor](/cloud/editor/ai) for more information.

Performing a security assessment of an application is time-consuming. It's always better and time-saving to automate steps whenever possible. Once you've found a security vulnerability, you can prepare a Nuclei template by defining the required HTTP request to reproduce the issue, and then test the same vulnerability across multiple hosts with ease. It's worth mentioning that ==you write the template once and use it forever==, as you don't need to manually test that specific vulnerability any longer.
Here are a few examples from the community making use of templates to automate security findings:

* [https://dhiyaneshgeek.github.io/web/security/2021/02/19/exploiting-out-of-band-xxe/](https://dhiyaneshgeek.github.io/web/security/2021/02/19/exploiting-out-of-band-xxe/)
* [https://blog.melbadry9.xyz/fuzzing/nuclei-cache-poisoning](https://blog.melbadry9.xyz/fuzzing/nuclei-cache-poisoning)
* [https://blog.melbadry9.xyz/dangling-dns/xyz-services/ddns-worksites](https://blog.melbadry9.xyz/dangling-dns/xyz-services/ddns-worksites)
* [https://blog.melbadry9.xyz/dangling-dns/aws/ddns-ec2-current-state](https://blog.melbadry9.xyz/dangling-dns/aws/ddns-ec2-current-state)
* [https://projectdiscovery.io/blog/if-youre-not-writing-custom-nuclei-templates-youre-missing-out](https://projectdiscovery.io/blog/if-youre-not-writing-custom-nuclei-templates-youre-missing-out)
* [https://projectdiscovery.io/blog/the-power-of-nuclei-templates-a-universal-language-of-vulnerabilities](https://projectdiscovery.io/blog/the-power-of-nuclei-templates-a-universal-language-of-vulnerabilities)

Nuclei templates are selected as part of any scans you create. You can select pre-configured groups of templates, individual templates, or add your own custom templates as part of your scan configuration.

* Check out [the scanning documentation](/cloud/scanning/overview) to learn more.

You are always welcome to share your templates with the community. You can either open a [GitHub issue](https://github.com/projectdiscovery/nuclei-templates/issues/new?assignees=\&labels=nuclei-template\&template=submit-template.md\&title=%5Bnuclei-template%5D+template-name) with the template details or open a GitHub [pull request](https://github.com/projectdiscovery/nuclei-templates/pulls) with your Nuclei templates. If you don't have a GitHub account, you can also make use of the [Discord server](https://discord.gg/projectdiscovery) to share the template with us.
You own any templates generated by the AI through the Template Editor. They are your property, and you are granted a perpetual license to use and modify them as you see fit.

The Template Editor feature in PDCP uses OpenAI. Yes, prompts are stored as part of the generated template metadata. This data is deleted as soon as the template or the user is deleted.

The accuracy of the generated templates depends primarily on the detail and specificity of the input you provide. The more detailed the information you supply, the better the AI can understand the context and create an accurate template. However, as with any AI tool, it is highly recommended to review, validate, and test any generated templates before using them in a live environment.

No, the AI does not use the templates you generate for further training or improvement of the AI model. The system only uses public templates and CVE data for training, ensuring your unique templates remain confidential.

# Template Editor Overview

Source: https://docs.projectdiscovery.io/cloud/editor/overview

Learn more about using the Nuclei Templates Editor

For more in-depth information about Nuclei templates, including details on template structure and supported protocols, [check out our templates documentation](/templates/introduction).

[The Template Editor](https://cloud.projectdiscovery.io/public/public-template) is a multi-functional cloud-hosted tool designed for creating, running, and sharing templates (Nuclei and ProjectDiscovery). It's packed with helpful features for individual and professional users seeking to manage and execute templates.

![Templates Editor](https://mintlify.s3.us-west-1.amazonaws.com/projectdiscovery/images/editor.jpg)

## Template Compatibility

In addition to the Template Editor, our cloud platform supports any templates compatible with [Nuclei](/nuclei/overview). These templates use exactly the same powerful YAML format supported in open source.
Take a look at our [Templates](/templates/introduction) documentation for a wealth of resources around template design, structure, and how templates can be customized to meet an enormous range of use cases. As always, if you have questions, [we're here to help](/help/home).

## Features

Current and upcoming features:

| Feature | Description and Use | Availability |
| -------------------------- | ------------------- | ------------ |
| **Editor** | Experience something akin to using VS Code with our integrated editor, built on top of Monaco. This feature allows easy writing and modification of Nuclei Templates. | Free |
| **Optimizer** | Leverage the in-built TemplateMan API to automatically lint, format, validate, and enhance your Nuclei Templates. | Free |
| **Scan (URL)** | Run your templates on a targeted URL to check their validity. | Free \* |
| **Debugger** | Utilize the in-built debugging function that displays requests and responses of your template scans, aiding troubleshooting and understanding template behavior. | Free |
| **Cloud Storage** | Store and access your Nuclei Templates securely anytime, anywhere using your account. | Free |
| **Sharing** | Share your templates for better collaboration by generating untraceable unique links. | Free |
| **AI Assistance** | Employ AI to craft Nuclei Templates based on the context of specified vulnerabilities. This feature simplifies template creation and minimizes the time required to create them. | Free \* |
| **Scan (LIST, CIDR, ASN)** | In the professional version, run scans on target lists, network ranges (CIDR), and AS numbers (ASN). | Teams |
| **REST API** | In the professional version, fetch templates, call the AI, and perform scans remotely using APIs. | Teams |
| **PDCP Sync** | Sync your generated templates with our cloud platform for easy access and management, available in the professional version. | Teams |

## Free Feature Limitations

Some features available within the free tier have usage caps in place:

* **Scan (URL):** You're allowed up to **100** scans daily.
* **AI Assistance:** Up to **10** queries can be made each day.

These limitations, which reset daily, ensure system integrity and availability while providing access to key functions.

## How to Get Started

Begin by ensuring you have an account. If not, sign up at [https://cloud.projectdiscovery.io](https://cloud.projectdiscovery.io/sign-up), then follow the steps below:

1. Log in to your account at [https://cloud.projectdiscovery.io](https://cloud.projectdiscovery.io).
2. Click the "**Create new template**" button to open a fresh editor.
3. Write and modify your template. The editor includes tools like syntax highlighting, snippet suggestions, and other features to simplify the process.
4. After writing your template, input your testing target and click the "**Scan**" button to validate your template's accuracy.
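To make the steps above concrete, here is a minimal template you could paste into the editor as a first test. This is a sketch under stated assumptions: the `id`, `name`, `author`, and matched header value are illustrative placeholders, not platform defaults.

```yaml
# Minimal Nuclei template sketch: flags responses whose headers contain
# the word "Apache". All identifiers below are illustrative.
id: example-server-header

info:
  name: Example Server Header Detection
  author: your-username          # replace with your handle
  severity: info

http:
  - method: GET
    path:
      - "{{BaseURL}}"            # the target URL supplied at scan time
    matchers:
      - type: word
        part: header
        words:
          - "Apache"
```

Running a scan against a target that serves an `Server: Apache` header should produce a match in the Debugger, which is a quick way to confirm the editor, scanner, and matcher logic are all working before you write something more ambitious.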