# Export list of unique domains discovered in an enumeration
Source: https://docs.projectdiscovery.io/api-reference/asset/export-list-of-unique-domains-discovered-in-an-enumeration
get /v1/asset/enumerate/{enumerate_id}/domains/export
Export list of unique domains discovered in an enumeration.

# Get Asset Changelogs for a given asset_id
Source: https://docs.projectdiscovery.io/api-reference/assets/get-asset-changelogs-for-a-given-asset_id
get /v1/asset/{asset_id}/changelogs
Get asset changelogs

# Add Config
Source: https://docs.projectdiscovery.io/api-reference/configurations/add-config
post /v1/scans/config
Add a new scan configuration

# Add custom severity mapping
Source: https://docs.projectdiscovery.io/api-reference/configurations/add-custom-severity-mapping
post /v1/scans/config/severity

# Add excluded templates and targets
Source: https://docs.projectdiscovery.io/api-reference/configurations/add-excluded-templates-and-targets
post /v1/scans/config/exclude
Add excluded templates or targets

# Delete Config
Source: https://docs.projectdiscovery.io/api-reference/configurations/delete-config
delete /v1/scans/config/{config_id}
Delete scan configuration

# Delete custom severity mapping
Source: https://docs.projectdiscovery.io/api-reference/configurations/delete-custom-severity-mapping
delete /v1/scans/config/severity

# Delete excluded templates or targets by ids
Source: https://docs.projectdiscovery.io/api-reference/configurations/delete-excluded-templates-or-targets-by-ids
delete /v1/scans/config/exclude
Delete excluded templates or targets by ids

# Get Config
Source: https://docs.projectdiscovery.io/api-reference/configurations/get-config
get /v1/scans/config/{config_id}
Get a scan configuration

# Get Configs List
Source: https://docs.projectdiscovery.io/api-reference/configurations/get-configs-list
get /v1/scans/config
Get user scan configurations list

# Get custom severity mappings
Source: https://docs.projectdiscovery.io/api-reference/configurations/get-custom-severity-mappings
get /v1/scans/config/severity

# Get excluded templates
Source: https://docs.projectdiscovery.io/api-reference/configurations/get-excluded-templates
get /v1/scans/config/exclude
Get excluded templates

# Modify custom severity mapping
Source: https://docs.projectdiscovery.io/api-reference/configurations/modify-custom-severity-mapping
patch /v1/scans/config/{config_id}/severity

# Update Config
Source: https://docs.projectdiscovery.io/api-reference/configurations/update-config
patch /v1/scans/config/{config_id}
Update an existing scan configuration

# Create an asset group
Source: https://docs.projectdiscovery.io/api-reference/enumerations/create-an-asset-group
post /v1/asset/enumerate/group
Create an asset group from existing enumeration data using filters

# Create Enumeration
Source: https://docs.projectdiscovery.io/api-reference/enumerations/create-enumeration
post /v1/asset/enumerate
Create a new enumeration

# Delete an asset group
Source: https://docs.projectdiscovery.io/api-reference/enumerations/delete-an-asset-group
delete /v1/asset/enumerate/group/{group_id}
Delete an asset group by id

# Delete Assets in bulk
Source: https://docs.projectdiscovery.io/api-reference/enumerations/delete-assets-in-bulk
delete /v1/asset/enumerate
Delete enumerations by enumerate ids

# Delete Enumeration
Source: https://docs.projectdiscovery.io/api-reference/enumerations/delete-enumeration
delete /v1/asset/enumerate/{enumerate_id}
Delete enumeration by enumerate_id

# Delete Enumeration Schedule
Source: https://docs.projectdiscovery.io/api-reference/enumerations/delete-enumeration-schedule
delete /v1/enumeration/schedule
Delete a re-scan schedule

# Export Enumeration
Source: https://docs.projectdiscovery.io/api-reference/enumerations/export-enumeration
get /v1/asset/enumerate/{enum_id}/export
Export enumeration content

# Export Enumeration of user
Source: https://docs.projectdiscovery.io/api-reference/enumerations/export-enumeration-of-user
get /v1/asset/enumerate/export
Export enumeration content

# Get Enumeration
Source: https://docs.projectdiscovery.io/api-reference/enumerations/get-enumeration
get /v1/asset/enumerate/{enumerate_id}
Get enumeration by enumerate_id

# Get enumeration config
Source: https://docs.projectdiscovery.io/api-reference/enumerations/get-enumeration-config
get /v1/asset/enumerate/{enumerate_id}/config

# Get Enumeration List
Source: https://docs.projectdiscovery.io/api-reference/enumerations/get-enumeration-list
get /v1/asset/enumerate
Get enumeration list

# Get Enumeration Schedules
Source: https://docs.projectdiscovery.io/api-reference/enumerations/get-enumeration-schedules
get /v1/enumeration/schedule
Get enumeration re-scan schedule

# Rescan Enumeration
Source: https://docs.projectdiscovery.io/api-reference/enumerations/rescan-enumeration
post /v1/asset/enumerate/{enumerate_id}/rescan
Re-run an existing enumeration

# Set Enumeration Schedule
Source: https://docs.projectdiscovery.io/api-reference/enumerations/set-enumeration-schedule
post /v1/enumeration/schedule
Set enumeration re-scan frequency

# Update an asset group
Source: https://docs.projectdiscovery.io/api-reference/enumerations/update-an-asset-group
patch /v1/asset/enumerate/group/{group_id}
Update an asset group by customizing the filters

# Update Enumeration
Source: https://docs.projectdiscovery.io/api-reference/enumerations/update-enumeration
patch /v1/asset/enumerate/{enumerate_id}
Update enumeration by enumerate_id

# Export filtered Scan results
Source: https://docs.projectdiscovery.io/api-reference/export/export-filtered-scan-results
post /v1/scans/results/export
Export filtered scan results

# Full Text Search
Source: https://docs.projectdiscovery.io/api-reference/full-text-search
get /v2/vulnerability/search
Full text search on vulnerabilities

# Get All Filters for Vulnerabilities
Source: https://docs.projectdiscovery.io/api-reference/get-all-filters-for-vulnerabilities
get /v2/vulnerability/filters
Get all filters for vulnerabilities

# Get asset enumeration history data
Source: https://docs.projectdiscovery.io/api-reference/get-asset-enumeration-history-data
get /v1/asset/enumerate/{enumerate_id}/history
Get asset enumeration history data

# Get audit logs for team
Source: https://docs.projectdiscovery.io/api-reference/get-audit-logs-for-team
get /v1/team/audit_log

# Get Vulnerability by ID
Source: https://docs.projectdiscovery.io/api-reference/get-vulnerability-by-id
get /v2/vulnerability/{id}
Get Vulnerability by ID

# Get scan log stats of given scan id
Source: https://docs.projectdiscovery.io/api-reference/history/get-scan-log-stats-of-given-scan-id
get /v1/scans/vulns/history

# Add Team Member
Source: https://docs.projectdiscovery.io/api-reference/internal/add-team-member
post /v1/user/team/member
Invite a new team member

# Create Workspace
Source: https://docs.projectdiscovery.io/api-reference/internal/create-workspace
post /v1/user/team
Create a new team

# Delete Team
Source: https://docs.projectdiscovery.io/api-reference/internal/delete-team
delete /v1/user/team
Delete a team (requires 0 members)

# Delete Team Member
Source: https://docs.projectdiscovery.io/api-reference/internal/delete-team-member
delete /v1/user/team/member
Delete a team member using member email

# Get Team
Source: https://docs.projectdiscovery.io/api-reference/internal/get-team
get /v1/user/team
Get team metadata

# Get Team Members
Source: https://docs.projectdiscovery.io/api-reference/internal/get-team-members
get /v1/user/team/member
Get team member list

# Update Team
Source: https://docs.projectdiscovery.io/api-reference/internal/update-team
patch /v1/user/team
Update an existing team

# Update Team Member
Source: https://docs.projectdiscovery.io/api-reference/internal/update-team-member
patch /v1/user/team/member
Accept team invite

# Cloud API Reference Introduction
Source: https://docs.projectdiscovery.io/api-reference/introduction

Details on the ProjectDiscovery API

## Overview

The ProjectDiscovery API v1 is organized around [REST](http://en.wikipedia.org/wiki/Representational_State_Transfer). Our API has resource-oriented URLs, accepts and returns JSON in most cases, and uses standard HTTP response codes, authentication, and verbs. It also conforms to the [OpenAPI Specification](https://www.openapis.org/).

This API documentation walks you through each of the available resources and provides code examples for `cURL`, `Python`, `JavaScript`, `PHP`, `Go`, and `Java`. Each endpoint includes the required authorization information and parameters, along with examples of the response you should expect.

## Authentication

The ProjectDiscovery API uses API keys to authenticate requests. You can view and manage your API key on the ProjectDiscovery [Dashboard](https://cloud.projectdiscovery.io/) or from the left navbar under Settings > [API Key](https://cloud.projectdiscovery.io/settings/api-key).

Authentication with the API is performed using a custom request header, `X-Api-Key`, set to the API key associated with your ProjectDiscovery account. You must make all API calls over `HTTPS`. Calls made over plain HTTP will fail, as will requests without authentication or without all required parameters.

## Resources

Below (and in the menu on the left) you can find the various resources available through the ProjectDiscovery API:

* Access public and private templates, AI template generation, and template sharing.
* Manage scans, scan schedules, import/export, and scan configurations.
* View and manage vulnerabilities, changelogs, and retest results.
* Monitor domain leaks, email leaks, and customer leak data.
* Manage API keys, user tunnels, team members, and workspace settings.
* Perform asset enumeration, manage asset groups, and track discovery history.
* Configure scan settings, severity mappings, and template exclusions.
* Export scan results, download logs, and access audit information.
* Full-text search, filters, and additional utility functions.

# Get Customer Leaks
Source: https://docs.projectdiscovery.io/api-reference/leaks/get-domain-customer-leaks
get /v1/leaks/domain/customers

# Get Domain Leaks
Source: https://docs.projectdiscovery.io/api-reference/leaks/get-domain-leaks
get /v1/leaks/domain

# Get Email Leaks
Source: https://docs.projectdiscovery.io/api-reference/leaks/get-email-leaks
get /v1/leaks/email

# Rename Tunnel
Source: https://docs.projectdiscovery.io/api-reference/rename-tunnel
patch /v1/user/tunnels

# Get all Vulnerability Changelogs
Source: https://docs.projectdiscovery.io/api-reference/results/get-all-vulnerability-changelogs
get /v1/scans/vuln/changelogs
Get changelogs of all vulnerabilities

# Get Scan Vulnerability
Source: https://docs.projectdiscovery.io/api-reference/results/get-scan-vulnerability
get /v1/scans/vuln/{vuln_id}
Get scan result vulnerability by ID

# Get Vulnerability Changelogs
Source: https://docs.projectdiscovery.io/api-reference/results/get-vulnerability-changelogs
get /v1/scans/vuln/{vuln_id}/changelogs
Get changelogs of a specific vulnerability by id

# Get Retest Vulnerability
Source: https://docs.projectdiscovery.io/api-reference/retests/get-retest-vulnerability
get /v1/retest/{vuln_id}
Get retest vulnerability (retests from editor)

# Export scan log of given scan id
Source: https://docs.projectdiscovery.io/api-reference/scan_log/export-scan-log-of-given-scan-id
get /v1/scans/{scan_id}/scan_log/export

# Create Scan
Source: https://docs.projectdiscovery.io/api-reference/scans/create-scan
post /v1/scans
Trigger a scan

# Delete Scan
Source: https://docs.projectdiscovery.io/api-reference/scans/delete-scan
delete /v1/scans/{scan_id}
Delete a scan using scanId

# Delete Scan in bulk
Source: https://docs.projectdiscovery.io/api-reference/scans/delete-scan-in-bulk
delete /v1/scans
Delete scans using scan ids

# Delete Scan Schedule
Source: https://docs.projectdiscovery.io/api-reference/scans/delete-scan-schedule
delete /v1/scans/schedule
Delete scan schedule for a user

# Delete Scan Vulnerability
Source: https://docs.projectdiscovery.io/api-reference/scans/delete-scan-vulnerability
delete /v1/scans/vulns
Batch delete scan vulnerabilities

# Export Filtered Scan
Source: https://docs.projectdiscovery.io/api-reference/scans/export-filtered-scan
post /v1/scans/{scan_id}/export
Export filtered scan results

# Export list of unique assets for a scan
Source: https://docs.projectdiscovery.io/api-reference/scans/export-list-of-unique-assets-for-a-scan
get /v1/scans/{scan_id}/asset/export
Export the list of all unique assets for a scan.

# Export Scan
Source: https://docs.projectdiscovery.io/api-reference/scans/export-scan
get /v1/scans/{scan_id}/export
Export scan results

# Export Scan Vulnerability
Source: https://docs.projectdiscovery.io/api-reference/scans/export-scan-vulnerability
get /v1/scans/vuln/{vuln_id}/export
Export a specific scan vulnerability

# Get All Scans History
Source: https://docs.projectdiscovery.io/api-reference/scans/get-all-scans-history
get /v1/scans/history
Get user scan history details

# Get Scan
Source: https://docs.projectdiscovery.io/api-reference/scans/get-scan
get /v1/scans/{scan_id}
Get details of a scan by scan ID

# Get Scan Config
Source: https://docs.projectdiscovery.io/api-reference/scans/get-scan-config
get /v1/scans/{scan_id}/config
Get scan metadata config

# Get Scan History
Source: https://docs.projectdiscovery.io/api-reference/scans/get-scan-history
get /v1/scans/{scanId}/history
Get scan history detail by scanId

# Get Scan IPs
Source: https://docs.projectdiscovery.io/api-reference/scans/get-scan-ips
get /v1/scans/scan_ips
Get user static scan IPs list

# Get Scan List
Source: https://docs.projectdiscovery.io/api-reference/scans/get-scan-list
get /v1/scans
Get user scans status

# Get Scan Schedules
Source: https://docs.projectdiscovery.io/api-reference/scans/get-scan-schedules
get /v1/scans/schedule
Get scan schedules for a user

# Import OSS Scan
Source: https://docs.projectdiscovery.io/api-reference/scans/import-oss-scan
post /v1/scans/import
Import scan details

# Rescan scan
Source: https://docs.projectdiscovery.io/api-reference/scans/rescan-scan
post /v1/scans/{scan_id}/rescan
Re-run an existing scan

# Retest vulnerability
Source: https://docs.projectdiscovery.io/api-reference/scans/retest-vulnerability
post /v1/scans/{vuln_id}/retest
Retest a scan vulnerability

# Set Scan Schedule
Source: https://docs.projectdiscovery.io/api-reference/scans/set-scan-schedule
post /v1/scans/schedule
Set a scan schedule for a user

# Stop Scan
Source: https://docs.projectdiscovery.io/api-reference/scans/stop-scan
post /v1/scans/{scan_id}/stop
Stop a running scan; has no effect in any other state.

# Update Imported Scan
Source: https://docs.projectdiscovery.io/api-reference/scans/update-imported-scan
patch /v1/scans/{scan_id}/import
Import more results to a given scan

# Update Scan
Source: https://docs.projectdiscovery.io/api-reference/scans/update-scan
patch /v1/scans/{scan_id}
Update scan metadata

# Update Scan Config
Source: https://docs.projectdiscovery.io/api-reference/scans/update-scan-config
patch /v1/scans/{scan_id}/config
Update scan metadata config

# Update Vulnerability Labels
Source: https://docs.projectdiscovery.io/api-reference/scans/update-vulnerability-labels
patch /v1/scans/vulns/labels
Batch update vulnerability labels

# Update Vulnerability Status
Source: https://docs.projectdiscovery.io/api-reference/scans/update-vulnerability-status
patch /v1/scans/vulns
Batch update vulnerability status

# Create Template
Source: https://docs.projectdiscovery.io/api-reference/templates/create-template
post /v1/template
Create a private template

# Delete Template
Source: https://docs.projectdiscovery.io/api-reference/templates/delete-template
delete /v1/template/{template_id}
Delete a private template using ID

# Generate AI Template
Source: https://docs.projectdiscovery.io/api-reference/templates/generate-ai-template
post /v1/template/ai
Generate a private template with AI Engine

# Get Early Template
Source: https://docs.projectdiscovery.io/api-reference/templates/get-early-template
get /v1/template/early/{id}
Get early template text

# Get Early Template List
Source: https://docs.projectdiscovery.io/api-reference/templates/get-early-template-list
get /v1/template/early
Get pdcp early template list

# Get Public Template
Source: https://docs.projectdiscovery.io/api-reference/templates/get-public-template
get /v1/template/public/{template_id}
Get public template data

# Get Public Template List
Source: https://docs.projectdiscovery.io/api-reference/templates/get-public-template-list
get /v1/template/public
Get public template list

# Get Shared Template
Source: https://docs.projectdiscovery.io/api-reference/templates/get-shared-template
get /v1/template/share/{template_id}
Get shared template text

# Get Shared Template List
Source: https://docs.projectdiscovery.io/api-reference/templates/get-shared-template-list
get /v1/template/share
Get shared template list

# Get Template
Source: https://docs.projectdiscovery.io/api-reference/templates/get-template
get /v1/template/{template_id}
Get private template text using ID

# Get Template List
Source: https://docs.projectdiscovery.io/api-reference/templates/get-template-list
get /v1/template
Get user private (my) templates

# Share Template
Source: https://docs.projectdiscovery.io/api-reference/templates/share-template
post /v1/template/share
Share a private template (shared-with-link)

# Update Template
Source: https://docs.projectdiscovery.io/api-reference/templates/update-template
patch /v1/template
Update an existing private template

# Search Templates
Source: https://docs.projectdiscovery.io/api-reference/templatev2/search-templates
get /v2/template/search
Search templates with filtering, sorting, and faceting capabilities

# Unshare/Delete template
Source: https://docs.projectdiscovery.io/api-reference/unsharedelete-template
delete /v1/template/share/{template_id}

# Update enumeration config
Source: https://docs.projectdiscovery.io/api-reference/update-enumeration-config
patch /v1/asset/enumerate/{enumerate_id}/config

# Update Shared Template
Source: https://docs.projectdiscovery.io/api-reference/update-shared-template
patch /v1/template/share/{template_id}

# Create API Key
Source: https://docs.projectdiscovery.io/api-reference/users/create-api-key
post /v1/user/apikey
Create a user api-key; this won't create a new api-key if one already exists.

# Delete API Key
Source: https://docs.projectdiscovery.io/api-reference/users/delete-api-key
delete /v1/user/apikey
Delete user api-key

# Get API Key
Source: https://docs.projectdiscovery.io/api-reference/users/get-api-key
get /v1/user/apikey
Get user api-key

# Get Tunnels List
Source: https://docs.projectdiscovery.io/api-reference/users/get-tunnels-list
get /v1/user/tunnels

# Rotate API Key
Source: https://docs.projectdiscovery.io/api-reference/users/rotate-api-key
post /v1/user/apikey/rotate
Rotate user api-key

# Settings & Administration

Source: https://docs.projectdiscovery.io/cloud/admin

Review administrative, team, and account settings

## Summary

This guide covers general account administration under settings in our cloud platform. These administrative and system settings include details about your account, team settings for administrators, and password/2FA. If you have questions about settings that are not covered here, or functionality that you think would be helpful - [get in touch](/help).

For details on other settings, check out the guides for those features:
* [Scanning](/cloud/scanning/overview)
* [Assets](/cloud/assets/overview)
* [Templates](/cloud/editor/overview)

## Settings

[Profile settings](https://cloud.projectdiscovery.io/settings) are available from the left side navigation under "Settings" for access to your Profile, Team, Scan IPs, and more.

## Profile

Profile displays your username, email address, and the option to delete your account.

*Note: The ability to update these profile components will be available in a future release.*

## Team

Under **Settings → Team** all users can view team settings. Users with the appropriate permissions can also modify team settings and manage team members: view or update team names, manage team members, and delete teams (supported for team owners).

* Use **Create Team** to create a new team (only available for Enterprise users).
* To modify team settings, select a team from the global navigation to display those settings.

### User Types

ProjectDiscovery supports four types of users with the following permissions:

* Owner: Read, write, invite, billing
* Admin: Read, write, invite
* Member: Read, write
* Viewer: Read

## Scan IPs

Add static IPs for greater control over your infrastructure scanning.

## Billing

Purchase, view, or modify your subscription. You can also view current as well as historical scan usage. Additional upgrade options with higher monthly asset limits are available - reach out to us with any questions about a custom contract.

## Security (Account Security)

Use Security to update your password or to enable 2-factor authentication.

* **Password** creates an account password that provides a login with your email (username) and password, as an alternative to using a linked account for login. These credentials will not replace any existing login configurations (for example, GitHub).
* **Two-step authentication** provides additional authentication for your account with an authenticator application.

# Audit Logs

Source: https://docs.projectdiscovery.io/cloud/admin/audit-logs

Track and monitor all user activities and system events across your organization

Audit Logs are available exclusively for Enterprise customers. Contact our [sales team](https://projectdiscovery.io/request-demo) to learn more about Enterprise features.

ProjectDiscovery's Audit Logs provide comprehensive visibility into all user activities and system events within your organization's ProjectDiscovery Cloud environment. The audit logging system captures detailed information about every significant action, including user logins, asset modifications, scan initiations, configuration changes, and API access events. Each log entry contains essential metadata such as the timestamp, user identity, IP address, action type, and affected resources, enabling security teams to maintain complete accountability and traceability.

The audit logging interface presents events in a chronological timeline, with advanced filtering capabilities that allow you to search and analyze specific types of activities. Security administrators can filter logs based on multiple parameters, including time ranges, user identities, action types, and affected resources. This granular filtering helps during security investigations, compliance audits, or when tracking specific changes across your organization's security workflows.

From a security operations perspective, the audit logs serve as a crucial tool for detecting unusual patterns or potentially unauthorized activities. For instance, you can identify unusual scan patterns, track template modifications, or monitor API key usage across your organization.
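As one way to pull these events programmatically, here is a hedged Python sketch built on the `GET /v1/team/audit_log` endpoint from the API reference. The base URL and the event field names shown are assumptions, not the documented schema; the NDJSON serialization is just a common interchange format for log pipelines.

```python
import json
from urllib.request import Request

PDCP_API_KEY = "your-api-key"  # placeholder
AUDIT_LOG_URL = "https://api.projectdiscovery.io/v1/team/audit_log"  # assumed base URL

def audit_request() -> Request:
    # Authenticated request for the team audit log (GET /v1/team/audit_log).
    return Request(AUDIT_LOG_URL, headers={"X-Api-Key": PDCP_API_KEY})

def to_ndjson(events: list) -> str:
    # Serialize audit events as newline-delimited JSON, a format many
    # log pipelines can ingest directly.
    return "\n".join(json.dumps(e, sort_keys=True) for e in events)

# Hypothetical event fields for illustration (timestamp, user, action, ip):
sample = [{"timestamp": "2024-05-01T12:00:00Z", "user": "alice@example.com",
           "action": "scan.create", "ip": "203.0.113.5"}]
print(to_ndjson(sample))
```

Fetching the real payload (via `urllib.request.urlopen(audit_request())`) and mapping its actual fields is left to your pipeline; check the endpoint page for the response schema.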
The system retains audit logs for an extended period, ensuring you have historical data available for compliance requirements or security investigations. Integration capabilities allow you to export audit logs to your existing security information and event management (SIEM) systems through our API, so you can incorporate ProjectDiscovery activity data into your broader security monitoring and alerting workflows. The audit log data can be particularly valuable during incident response scenarios, providing a clear timeline of events and actions leading up to or following a security event.

For organizations with compliance requirements, our audit logs help demonstrate adherence to various security frameworks and regulations. The comprehensive logging of user actions, access patterns, and system changes provides the necessary documentation for security audits and compliance reviews. Each log entry is immutable and cryptographically signed, ensuring the integrity of your audit trail.

# SAML SSO

Source: https://docs.projectdiscovery.io/cloud/admin/saml-sso

Enterprise Single Sign-On (SSO) integration for secure team access

SAML SSO is available exclusively for Enterprise customers. Contact our [sales team](https://projectdiscovery.io/request-demo) to enable SAML SSO for your organization.

ProjectDiscovery supports Enterprise Single Sign-On (SSO) through SAML 2.0, enabling seamless and secure authentication using your organization's Identity Provider (IdP). Our SAML implementation is powered by Clerk, providing robust support for major identity providers including:

* Microsoft Azure AD
* Google Workspace
* Okta Workforce
* Custom SAML Providers

## Implementation Process

SAML SSO setup requires manual configuration and verification by the ProjectDiscovery team to ensure secure implementation. Here's what to expect:

1. **Initial Setup Request**
   * After purchasing an Enterprise contract, the ProjectDiscovery team will reach out to begin the configuration process
   * You'll be assigned a dedicated technical contact for the setup
2. **Configuration Steps**
   * Provide your IdP metadata and certificates
   * Configure allowed domains and user attributes
   * Set up SAML assertion mapping
   * Test the integration in a staging environment
3. **Verification & Go-Live**
   * Validate user provisioning and authentication
   * Confirm security settings and access controls
   * Enable the integration for production use

## Supported Features

Our SAML integration includes comprehensive enterprise-grade features:

* **Automated User Provisioning**
  * Attribute mapping for user profiles
  * Role and permission synchronization
* **Security Controls**
  * Domain-based access restrictions
  * Enforced SSO for specified domains
  * Session management and timeout settings
* **Advanced Options**
  * Support for IdP-initiated SSO
  * Multi-factor authentication integration
  * Custom attribute mapping

## Important Notes

* SAML SSO setup requires manual configuration due to its security-critical nature
* The setup process typically takes 1-2 business days
* All configurations are thoroughly tested before production deployment
* Changes to SAML settings may require ProjectDiscovery team assistance
* Existing users can be migrated to SSO authentication seamlessly

## Getting Started

To enable SAML SSO for your organization:

1. Ensure you have an Enterprise contract
2. Contact your account representative or [sales team](https://projectdiscovery.io/request-demo)
3. Prepare your IdP configuration details
4. Schedule a setup call with our technical team

Our team will guide you through the entire process, ensuring a secure and successful implementation of SAML SSO for your organization.
# Scan IPs for Whitelisting

Source: https://docs.projectdiscovery.io/cloud/admin/scan-ips

Configure and manage scanning IP addresses for enterprise security controls

Dedicated Scan IPs are available exclusively for Enterprise customers. Contact our [sales team](https://projectdiscovery.io/request-demo) to learn more about Enterprise features.

ProjectDiscovery's Enterprise scanning infrastructure operates from a dedicated set of static IP addresses, enabling organizations to implement precise security controls and whitelisting policies. These fixed IP ranges are exclusively assigned to your organization's scanning activities, providing consistent and identifiable sources for all security assessments conducted through the platform. This dedicated IP infrastructure ensures that your security teams can easily distinguish ProjectDiscovery's legitimate scanning traffic from potential unauthorized scanning attempts.

When configuring your security infrastructure to accommodate ProjectDiscovery scans, you can whitelist these specific IP addresses in your firewalls, Web Application Firewalls (WAFs), or Intrusion Prevention Systems (IPS). This whitelisting approach allows you to maintain strict security controls while ensuring uninterrupted vulnerability scanning operations. The platform provides both IPv4 and IPv6 addresses, supporting organizations with diverse network configurations and compliance requirements.

Enterprise customers can customize scanning behavior on a per-IP basis, including the ability to set specific rate limits, configure custom headers, or assign particular IPs to different types of scans. This granular control helps organizations optimize their scanning operations while maintaining compliance with internal security policies. For instance, you might assign certain IPs for external asset discovery while reserving others for intensive vulnerability scanning, ensuring proper resource allocation and traffic management.
The platform includes monitoring and analytics for scan traffic from these IPs, providing visibility into scanning patterns, bandwidth usage, and potential scanning issues. This monitoring helps security teams optimize their scanning strategies and troubleshoot any connectivity or performance problems. Additionally, if any of your security systems flag scanning activity from these IPs, you can quickly verify the legitimacy of the traffic against your assigned IP ranges.

For organizations operating in regulated environments or with strict security requirements, our dedicated IP infrastructure provides the necessary isolation and control. Each scanning IP is documented and can be included in security compliance documentation, making it easier to demonstrate proper security controls during audits. The platform also supports custom DNS resolution and proxy configurations when needed for specialized scanning scenarios.

# Adding Assets

Source: https://docs.projectdiscovery.io/cloud/assets/adding-assets

Learn how to add and manage assets in ProjectDiscovery

## Overview

Assets in our cloud platform can be any hosts you want to monitor - URLs, IP addresses, or CIDR ranges. There are three primary methods to add assets:

* Automatically discover and monitor assets from your root domains
* Connect cloud providers to import and sync assets automatically
* Programmatically add and manage assets using our REST API

## Asset Discovery

The fastest way to get started is through our asset discovery feature:

1. Navigate to **Assets → Add New Assets**
2. Enter root domains/CIDR/IPs.

Discovery features can be customized for Enterprise plans. Contact our [sales team](mailto:sales@projectdiscovery.io) for custom requirements.
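For the REST API route to adding assets, here is a hedged Python sketch of creating an enumeration via `POST /v1/asset/enumerate` (the Create Enumeration endpoint in the API reference). The base URL and the `name`/`targets` body fields are assumptions, not the documented schema, so check the endpoint page for the exact request body.

```python
import json
from urllib.request import Request

PDCP_API_KEY = "your-api-key"  # placeholder
ENUMERATE_URL = "https://api.projectdiscovery.io/v1/asset/enumerate"  # assumed base URL

def create_enumeration(name: str, targets: list) -> Request:
    # Build the POST /v1/asset/enumerate request. The "name" and "targets"
    # fields are hypothetical -- consult the endpoint reference for the
    # real schema before sending.
    body = json.dumps({"name": name, "targets": targets}).encode()
    return Request(
        ENUMERATE_URL,
        data=body,
        method="POST",
        headers={"X-Api-Key": PDCP_API_KEY, "Content-Type": "application/json"},
    )

# Targets can mix root domains, IPs, and CIDR ranges:
req = create_enumeration("prod-perimeter", ["example.com", "198.51.100.0/24"])
# from urllib.request import urlopen; urlopen(req)  # sends the call
```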
**Cloudflare Port Exclusions**: By default, the asset discovery process excludes [Cloudflare proxy-compatible ports](https://developers.cloudflare.com/fundamentals/reference/network-ports/#network-ports-compatible-with-cloudflares-proxy) during enumeration to optimize scanning efficiency and reduce noise. This includes ports 8080, 8880, 8443, 2052, 2082, 2086, 2095, 2053, 2083, 2087, and 2096 when Cloudflare infrastructure is detected. Standard HTTP/HTTPS ports (80, 443) are not excluded. This behavior is enabled by default and cannot be configured by users. If you need to disable this exclusion behavior, please contact our [support team](mailto:support@projectdiscovery.io).

# Custom & Bulk Asset Labeling

Source: https://docs.projectdiscovery.io/cloud/assets/custom-labeling

Create and manage custom labels for your assets with powerful bulk labeling capabilities

Custom Labels in ProjectDiscovery Cloud are user-defined tags that you can manually assign to any discovered asset. This feature works alongside the automatic, AI-driven labels that the platform generates. While the system's AI assigns labels for website types (e.g., API docs, internal apps, login pages, admin panels) and environments (e.g., production, staging, internal) by default, custom labels give you the flexibility to define your own categories and classifications for assets. In other words, you're not limited to the auto-generated labels – you can tag assets with labels that make sense for your organization's context (such as project names, owner teams, sensitivity, or any internal naming scheme).

### How They Work

Using the ProjectDiscovery Cloud interface, a user can select an asset and assign one or more custom labels to it. These labels then appear alongside the asset in the inventory, just like the AI-generated labels. This manual labeling is valuable for capturing contextual information that automated methods might not know.
For example, you might label certain assets as "Critical" if they pertain to core infrastructure, or tag a set of hosts as "Internal" if they should not be exposed to the internet. By labeling assets in a way that mirrors your environment and business, you ensure that important attributes of each asset are immediately visible. ### Benefits Custom labels allow security teams to organize assets according to custom criteria and quickly spot key asset properties at a glance. This user-driven categorization adds an extra layer of context – teams gain full control over how assets are categorized. It becomes easier to filter and group assets based on these tags (for example, viewing all assets labeled "Internal" or "Web-Server"). Ultimately, this leads to better asset management, as the platform classifies results to help you organize, contextualize, and prioritize your assets. In practice, custom labels enable workflows like separating production vs. staging assets or flagging high-risk systems, so that teams can focus on relevant subsets of the attack surface during monitoring and scanning. ## Bulk Labeling ProjectDiscovery Cloud also supports Bulk Labeling, which lets users apply a label to many assets at once, rather than tagging each asset individually. This feature is implemented through the platform's powerful filtering system. Users can filter their asset list by specific criteria and then assign a label to all assets matching that filter in a few clicks. In effect, bulk labeling dramatically speeds up the process of categorizing large numbers of assets. ### How It Works The platform provides filtering across 14+ attributes of assets – you can narrow results by things like port number, technology, domain, IP, content length, and even by existing labels. Here's how to create and save bulk labels: 1.
**Apply Filters** * Navigate to the Assets view * Click the "Filter" button in the top left * Select your desired filter criteria (e.g., port, technology, domain) * Apply multiple filters to refine your selection 2. **Select Assets** * After filtering, review the matching assets 3. **Apply Labels** * Click the "Label" button in the action bar * Enter your label name or select from existing labels * Click "Apply" to tag all selected assets 4. **Save as Dynamic Group** (Optional) * Click "Save Filter" in the top right * In the pop-up dialog, enter a name for your dynamic group * Click "Save" to create your dynamic group Your saved dynamic group will automatically update as new assets matching your filter criteria are discovered. For example, you could label all assets running on port 8088 as 'staging' in just a few clicks. This bulk tagging via filters approach means you don't have to manually edit each asset entry – the system streamlines it for you. ### Advantages Bulk labeling is especially useful for applying environment or role labels to many assets simultaneously. It ensures consistency at scale – every asset meeting the criteria gets the exact same label, avoiding omissions or typos that might happen with one-by-one tagging. It's also a huge time-saver for large asset sets; teams can categorize hundreds or thousands of assets in seconds by leveraging filters, instead of minutes or hours. By making it easy to tag assets in bulk, ProjectDiscovery helps teams maintain an organized asset inventory even as new data pours in. ## Use Cases and Workflow Integration Both custom labels and bulk labeling open up new use cases for integrating ProjectDiscovery into security team workflows: ### Environment Segmentation Teams can mark assets by environment (e.g., Development, Staging, Production) using custom labels. Bulk labeling makes it easy to apply these environment tags en masse. 
For example, filtering by port 8088 and tagging those assets as "staging" is a quick way to group all staging assets. This segmentation allows different handling of assets based on environment – for instance, running more frequent scans on production assets or applying stricter monitoring to internal-only systems. ### Technology or Port-based Grouping If many assets share a common attribute (such as a specific open port, technology, or domain pattern), you can filter for them and label them in bulk. For instance, label all assets running an outdated software version as "Legacy" or all assets on port 22 as "SSH-Servers." This practice helps in quickly identifying groups of assets that might require a specific security assessment or patching regimen. The filtering system supports multi-select and complex queries (e.g., all assets on either Nginx or Apache) to refine these groups. ### Dynamic Asset Groups for Monitoring After labeling assets, those labels can be used to create saved views or dynamic subgroups in the platform. A dynamic subgroup is essentially a saved filter that updates automatically as assets change. For example, once you've labeled certain assets as "Critical", you could save a filter for `label = Critical`. As new assets get tagged with "Critical" (either through AI suggestions or manual labeling), they will automatically appear in that group. This is highly useful for workflows like continuous monitoring or targeted vulnerability scanning – you always have an up-to-date list of assets in that category without rebuilding queries. ### Prioritization and Triage Custom labels can encode business context such as ownership (e.g., tagging an asset with the responsible team or project name) or criticality (e.g., High-Value, Low-Impact). Using bulk operations, a newly onboarded set of assets can quickly be labeled according to input from asset owners or CMDB data. Thereafter, security analysts can filter by these labels to prioritize issues.
For example, during incident response or risk review, one might focus on assets labeled "Production" and "Customer-Facing" first, since an issue on those could be more severe. # Discovery Target Exclusions Source: https://docs.projectdiscovery.io/cloud/assets/exclusions Configure patterns to exclude specific targets from asset discovery ## Overview Discovery Target Exclusions allow you to proactively prevent specific assets or patterns from being discovered during asset enumeration. When exclusions are configured, these targets are actively filtered out of the discovery process, helping you focus on relevant assets and reduce noise in your asset inventory. This feature is particularly useful for excluding internal staging environments, test domains, government domains, or any other assets that should not be included in your attack surface monitoring. **Quick Access**: Discovery Target Exclusions are managed in [Settings → Discovery Target Exclusions](https://cloud.projectdiscovery.io/settings/exclusions). ## How It Works The exclusion system operates at the discovery layer, filtering out targets before they are added to your asset inventory. This ensures that excluded patterns are never discovered, scanned, or monitored by the platform. **Global Exclusions**: Target exclusions are applied globally across all discovery operations. Once configured, exclusions affect all current and future asset discoveries, not just individual discovery sessions. ### Supported Exclusion Types Exclude specific subdomains from discovery Exclude individual IP addresses or ranges Use wildcard patterns to exclude multiple targets ## Configuration ### Adding Exclusions 1. Navigate to **Settings → Discovery Target Exclusions** or visit [cloud.projectdiscovery.io/settings/exclusions](https://cloud.projectdiscovery.io/settings/exclusions) 2. Click **+ Add Exclusion** to open the exclusion configuration panel 3. Enter your exclusion patterns in the text area (one pattern per line) 4. 
Click **Add** to save your exclusions

### Exclusion Pattern Examples

#### Basic Subdomain Exclusions

```
staging.company.com
dev.company.com
test.company.com
internal-tools.company.com
```

#### Wildcard Patterns

```
*.staging.company.com
test.*.company.com
dev-*.internal.company.com
```

#### IP Address Exclusions

```
192.168.1.100
10.0.0.0/8
172.16.0.0/12
```

#### Government and Restricted Domains

```
*.gov
*.mil
*.edu
```

## Pattern Syntax

### Wildcard Support

The exclusion system supports wildcard patterns using the asterisk (`*`) character:

* **Prefix wildcards**: `*.staging.company.com` - Excludes any subdomain ending with `.staging.company.com`
* **Infix wildcards**: `test.*.company.com` - Excludes any subdomain starting with `test.` and ending with `.company.com`
* **Multiple wildcards**: `*.staging.*.company.com` - Supports multiple wildcards in a single pattern

### Pattern Matching Rules

* Patterns are **case-insensitive**
* Each line represents a separate exclusion pattern
* Patterns are matched during the discovery phase
* Once excluded, targets will not appear in any subsequent discovery results

## Best Practices

Use wildcard patterns to exclude entire environment categories:

```
*.staging.company.com
*.dev.company.com
*.test.company.com
```

Exclude internal-only domains and IP ranges:

```
*.internal.company.com
10.0.0.0/8
192.168.0.0/16
172.16.0.0/12
```

Respect organizational policies by excluding restricted domains:

```
*.gov
*.mil
*.edu
client-*.company.com
```

Use broader patterns when possible to reduce configuration complexity:

* Instead of listing individual staging subdomains, use `*.staging.company.com`
* Group similar patterns together for better organization
* Regularly review and update exclusion patterns as your infrastructure evolves

## Important Considerations

**Exclusions are Permanent**: Once a target is excluded, it will not be discovered in future enumerations.
Make sure your exclusion patterns are accurate to avoid missing important assets. **Discovery Impact**: Exclusions only affect the discovery process. If an asset was already discovered before adding an exclusion, it will remain in your inventory until manually removed. **Testing Patterns**: Start with specific exclusions and gradually expand to broader patterns. This helps ensure you don't accidentally exclude important assets. ## Managing Exclusions ### Viewing Current Exclusions All active exclusions are displayed in the [Discovery Target Exclusions](https://cloud.projectdiscovery.io/settings/exclusions) interface as individual items in a list format. Each exclusion shows: * The exact pattern configured * A remove button (X icon) for easy deletion ### Removing Exclusions To remove individual exclusions: 1. Navigate to **Settings → Discovery Target Exclusions** or visit [cloud.projectdiscovery.io/settings/exclusions](https://cloud.projectdiscovery.io/settings/exclusions) 2. Locate the exclusion you want to remove in the list 3. Click the **X** icon next to the exclusion pattern 4. The exclusion will be immediately removed from your configuration Removing exclusions will allow those targets to be discovered in future enumerations. ## Integration with Discovery Workflows Target exclusions integrate seamlessly with all discovery methods and are applied globally across the platform: * **Automatic Discovery**: Exclusions apply to all automated asset discovery processes * **Manual Enumeration**: Manually triggered discoveries respect exclusion patterns * **Cloud Integration**: Cloud-discovered assets are filtered against exclusion patterns **Global Application**: All exclusion patterns apply to every discovery operation across your organization, ensuring consistent filtering regardless of the discovery method or who initiates it. 
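To sanity-check an exclusion list before saving it, the wildcard semantics described under Pattern Syntax can be approximated locally. Here is a minimal sketch using Python's `fnmatch`; this is an approximation for hostname patterns only, not the platform's actual matcher:

```python
from fnmatch import fnmatchcase


def is_excluded(target: str, patterns: list[str]) -> bool:
    """Case-insensitive wildcard match, approximating the exclusion rules."""
    t = target.lower()
    # fnmatchcase avoids platform-dependent case handling; we lowercase both sides
    return any(fnmatchcase(t, pattern.lower()) for pattern in patterns)


patterns = ["*.staging.company.com", "test.*.company.com"]
print(is_excluded("API.Staging.Company.com", patterns))  # True: prefix wildcard applies
print(is_excluded("www.company.com", patterns))          # False: no pattern applies
```

Note that CIDR exclusions such as `10.0.0.0/8` express address ranges, so checking them locally would require Python's `ipaddress` module rather than string matching.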
*** By implementing target exclusions, you can ensure that your asset discovery process focuses on the assets that matter most to your security posture while automatically filtering out noise and irrelevant targets. # Dynamic Asset Grouping Source: https://docs.projectdiscovery.io/cloud/assets/grouping Create and manage filtered asset groups for targeted visibility Dynamic asset grouping allows you to save and organize filtered asset views, making it easier to focus on specific subsets of your infrastructure. By creating saved filter combinations, security teams can quickly access the assets that matter most to them without having to reapply complex filter conditions each time. Dynamic groups source their data from parent asset discoveries and cannot be rescanned or refreshed independently. Any updates to the underlying assets will automatically reflect in the dynamic groups. ## Creating Dynamic Asset Groups 1. Navigate to your discovered **Asset groups** 2. Apply filters to your assets using the filter option (e.g., by label, technology, port, status) 3. Once you've created the desired view, click **Save filters** 4. Enter a descriptive name 5. Click **Save** to create your dynamic group Your saved dynamic groups will appear in the same [list as your assets](https://cloud.projectdiscovery.io/assets). From here you can: * **Access groups:** Click any group name to instantly view that filtered subset * **Edit groups:** Hover over a group name and click the edit icon to modify the filters * **Delete groups:** Remove groups that are no longer needed via the group settings menu * **Share groups:** Generate shareable links for specific groups (Enterprise plan only) ## Use Cases and Best Practices Dynamic asset groups offer versatile applications across your organization's security workflows.
Create team-specific views (DevOps focusing on cloud technologies, Security teams monitoring vulnerabilities, Compliance teams tracking regulated systems), environment-specific groups (production, development, third-party integrations), and security-priority filters (critical infrastructure, public-facing systems, legacy technologies). For optimal results, maintain descriptive naming conventions, document each group's purpose, regularly review and update as infrastructure evolves, limit group quantity to maintain focus, and combine with custom labels for more powerful filtering. This approach streamlines asset management while providing targeted visibility where it matters most. ## Limitations * Dynamic groups cannot be targeted for independent rescans * The results in a dynamic group will always reflect the most recent state of the parent discovery * Filter conditions apply only to discovered attributes - custom data cannot be used for filtering When the parent asset discovery is updated or rescanned, all associated dynamic groups will automatically reflect the new data without any manual intervention required. ## Related Features Learn how to use labels to better organize and filter your assets Create and apply custom labels to enhance your grouping strategy # AI-Powered Asset Labeling Source: https://docs.projectdiscovery.io/cloud/assets/labeling Automatically categorize and contextualize your assets with AI-driven labeling Asset labeling is currently in early beta and operates asynchronously. The initial labeling process may take some time as we optimize performance. We're actively working on speed improvements to make this process faster and more efficient. **Asset labeling** is the automated process of categorizing and contextualizing the assets discovered by ProjectDiscovery. Instead of presenting you with a raw list of domains or IPs, the platform intelligently **classifies assets** by attaching descriptive labels or tags to each one. 
These labels provide immediate context about what an asset is – for example, distinguishing a marketing website from an API endpoint or identifying a development server versus a production system. By automatically organizing assets into meaningful categories, asset labeling helps security teams understand their attack surface at a glance and focus on what matters most. In practical terms, once ProjectDiscovery discovers an asset, it will evaluate that asset's characteristics and assign labels that describe its role or nature. For instance, a web application login page might be labeled as a "Login Portal," or a host with a name like *staging.example.com* might get tagged as "Staging Environment" to indicate it's not a production system. Asset labeling bridges the gap between raw asset data and the business context behind those assets, making your asset inventory more informative and easier to navigate. ## How It Works ProjectDiscovery's asset labeling engine classifies assets by analyzing various pieces of information collected during discovery. It uses a combination of asset metadata, DNS information, HTTP responses, and even screenshots to determine how to label each asset: * **Asset Metadata:** Basic details about the asset (such as IP addresses, open ports, SSL certificate data, and hosting information) are examined for clues. For example, an SSL certificate's Common Name might reveal the application's name, or an IP's ASN could indicate the cloud provider or organization owning the asset. This metadata helps identify what the asset might be (e.g., a cloud storage bucket, a VPN gateway, etc.) and adds context for labeling. * **DNS Records:** DNS information is used to infer the asset's purpose or ownership. The domain or subdomain names can be very telling. For instance, an asset under `dev.` or `staging.` subdomains suggests a non-production environment, whereas something like `mail.example.com` could indicate an email server. 
CNAME records might point to a known service (for example, a CNAME to a SaaS provider's domain), which the platform can recognize and label accordingly. In short, ProjectDiscovery looks at hostnames and DNS details to glean context (like environment, service type, or associated product) that inform the asset's label. * **HTTP Responses:** For web assets, the content and behavior of the HTTP(S) service are analyzed. The platform uses its HTTP probing capabilities to gather response headers, status codes, and page content. This includes looking at the HTML title, body text, and other fingerprints. Certain keywords or patterns can identify the application type – for example, a page title containing "Login" or a form with password fields likely indicates a login portal, while a default page saying "Welcome to nginx" indicates a generic web server instance. The system also detects technologies and frameworks running on the asset (e.g., identifying a WordPress site or an Apache server from response signatures) via deep technology fingerprinting. All this HTTP-derived information feeds into the labeling decision. * **Screenshots:** ProjectDiscovery can capture screenshots of discovered web services. These screenshots provide a visual snapshot of the asset's interface. In the asset labeling process, screenshots serve as an additional data point for understanding the asset. For example, a screenshot that shows a login screen or an admin panel UI is a strong indicator of the asset's function (even if the text wasn't conclusive). While the labeling at this beta stage is mostly driven by metadata and textual analysis, having a screenshot means that if automated logic doesn't perfectly categorize an asset, an analyst can quickly glance at the image and understand what the asset is. Behind the scenes, all these inputs are combined to assign one or multiple labels to the asset. 
The system uses a rules-based approach (and will continue to get smarter over time) to match patterns or signatures with label categories. For example, if an asset's DNS name contains "api" and the HTTP response returns JSON, a rule might label it as an "API Endpoint." Similarly, a host identified to be running Jenkins (via tech fingerprinting of HTTP response) might get a label like "Jenkins CI" to denote it's a CI/CD service. Each label is essentially a quick descriptor that summarizes an aspect of the asset, allowing you to immediately understand its nature without deep manual investigation. ## Benefits of Automated Labeling Automated asset labeling brings several advantages to security professionals and engineers managing a large number of assets: * **Reduces Manual Effort:** One of the biggest benefits is cutting down the tedious work of labeling assets by hand. In the past, teams might maintain spreadsheets or use tagging systems to mark which assets are production, which are internal, which belong to a certain team, etc. ProjectDiscovery's automated approach does this heavy lifting for you. As soon as assets are discovered, the platform annotates them with relevant labels, sparing you from examining each asset individually and typing out tags. This automation frees up your time to focus on higher-value tasks like analyzing findings or improving security controls. * **Speeds Up Security Triage:** With assets automatically categorized, you can prioritize and triage security issues faster. When a new vulnerability or incident is reported, having labeled assets means you instantly know the context. For example, if an alert comes in for *api.test.example.com*, an "API" label and perhaps a "Staging" label on that asset will tell you it's a staging API server. You can then decide the urgency (maybe lower than a production issue) and the appropriate team to notify. Without having to dig for this information, response times improve. 
In short, labels act as immediate context clues that help you quickly determine the criticality of an asset and the impact of any associated vulnerabilities. * **Better Asset Management & Organization:** Asset labels make it much easier to organize and filter your asset inventory. You can group assets by their labels to get different views of your attack surface. For instance, you might filter to see all assets labeled "Production" to ensure you're focusing scans and monitoring on live customer-facing systems, or you might pull up all assets labeled "Login Portal" to review authentication points in your infrastructure. This capability turns a flat list of assets into a richly organized dataset that can be sliced and diced for various purposes. It enhances visibility across your environment – you can quickly answer questions like "How many external login pages do we have?" or "Which assets are running database services?" if such labels are applied. Ultimately, this leads to more structured and efficient asset management. * **Consistency and Scale:** Automated labeling applies the same criteria uniformly across all assets, ensuring consistent classification. Human tagging can be subjective – different team members might label similar assets differently or overlook some assets entirely. With ProjectDiscovery doing it automatically, every asset is evaluated with the same logic, and nothing gets skipped due to oversight. This consistency is especially important when you have hundreds or thousands of assets in dynamic cloud environments. The feature scales effortlessly – no matter how many assets you discover overnight, each will get labeled without adding to anyone's workload. As your attack surface grows, automated labeling keeps the context up-to-date continuously, which is crucial for maintaining an accurate asset inventory in fast-changing environments. 
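The rules-based matching described under How It Works can be pictured with a few toy rules. The sketch below is illustrative only: the rule conditions are hypothetical simplifications, and only the label names ("API Endpoint", "Staging Environment", "Login Portal") come from the examples in this document; the platform's actual classifier combines far more signals.

```python
def suggest_labels(hostname: str, content_type: str = "", page_title: str = "") -> set[str]:
    """Toy rule set mirroring the examples in the text; not the real classifier."""
    labels: set[str] = set()
    parts = hostname.lower().split(".")
    # DNS name contains an "api" label and the service speaks JSON -> API endpoint
    if "api" in parts and "json" in content_type.lower():
        labels.add("API Endpoint")
    # dev/staging/test subdomains suggest a non-production environment
    if parts[0] in ("dev", "staging", "test"):
        labels.add("Staging Environment")
    # a "Login" page title is a strong signal for a login portal
    if "login" in page_title.lower():
        labels.add("Login Portal")
    return labels


print(suggest_labels("api.test.example.com", content_type="application/json"))
# {'API Endpoint'}
```

Because each rule fires independently, an asset can accumulate multiple labels, which matches how the platform assigns one or more labels per asset.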
In summary, automated asset labeling streamlines asset management by eliminating manual tagging drudgery, accelerating the interpretation of asset data, and bringing order and clarity to your inventory. It's an efficiency boost that also improves the quality of your security posture by ensuring you always know what each asset is and why it's there. # Asset Discovery and Exposure Management Source: https://docs.projectdiscovery.io/cloud/assets/overview Next-generation attack surface management and asset discovery platform Attack Surface Management (ASM) has evolved from basic asset enumeration into a sophisticated process that continuously discovers, classifies, and monitors all assets vulnerable to attack. Modern organizations face ever‐expanding digital footprints spanning traditional internet-facing systems, dynamic cloud environments, and complex distributed services. ProjectDiscovery redefines ASM by combining proven open‑source techniques with advanced cloud‑native capabilities. This unified platform delivers instant insights—through a search‑like experience and deep reconnaissance—ensuring comprehensive coverage and real‑time visibility into your entire infrastructure. In essence, it lets your security team see your organization's attack surface as an attacker would, leaving no blind spots. This document outlines the core workflows and architectural components of ProjectDiscovery's ASM and Exposure Management. It is designed to help new users quickly understand how the system works and to provide a structured, yet developer‑friendly, overview for security and engineering teams. *** ## Platform Architecture Our next‑generation asset discovery platform is built on a revolutionary three‑layer architecture developed through extensive collaboration with hundreds of security teams. Each layer plays a distinct role in mapping and monitoring your infrastructure. ### 1. 
External Discovery Layer * **Instant Enumeration:** Leveraging our enhanced Chaos database, this layer delivers immediate results through pre‑indexed data for hundreds of thousands of domains. * **Deep Reconnaissance:** Active reconnaissance methods (advanced DNS brute‑forcing, permutation analysis, certificate transparency log monitoring) supplement instant results. * **ASN Mapping:** Sophisticated ASN correlation (ASNMap) uncovers hidden relationships by mapping IP ranges associated with your organization. This network‑level insight expands your visibility beyond known domains. * **Third‑Party Data & Subsidiary Discovery:** Integration with external sources (e.g., Shodan, Censys, FOFA) and subsidiary detection mechanisms automatically identify related brands and assets—ensuring that acquired or lesser‑known entities are not overlooked. ### 2. Cloud Integration Layer * **Real‑Time Cloud Asset Discovery:** Our enhanced Cloudlist engine connects natively with AWS, Azure, GCP, and more, continuously monitoring your cloud footprint. * **Service & Configuration Monitoring:** Advanced heuristics identify exposed services and risky configurations in real‑time, while persistent API connections ensure your cloud inventory stays up‑to‑date. * **Cross‑Cloud Correlation:** Cloud‑based assets are linked with ASN data and external discoveries to provide a unified view of your overall attack surface. ### 3. Asset Management Layer * **Enrichment & Classification:** Raw asset data is transformed through multi‑stage analysis. Comprehensive DNS analysis, HTTP probing (with screenshots and technology fingerprinting), and certificate evaluation work together to create detailed asset profiles. * **Automated Labeling:** AI‑powered models automatically categorize and tag assets based on their characteristics, behavior patterns, and risk profiles. Users can also define custom labels and apply bulk labeling to further organize assets by environment, ownership, or risk. 
* **Graph‑Based Relationship Mapping:** Advanced mapping visualizes complex asset relationships and attack paths, providing actionable intelligence for prioritizing security efforts. *** ## Key Workflows & Features Automatically discover and track all external-facing and internal assets using integrated tools like Subfinder, Naabu, Httpx, and more Configure patterns to exclude specific targets from discovery using subdomains, IPs, or wildcard patterns Organize assets with AI-generated and custom labels for efficient management and prioritization Capture visual snapshots of web assets for quick identification of exposed interfaces Automatically map and manage assets across multiple subsidiaries and brands Native integration with major cloud providers for comprehensive asset discovery Seamless integration with Nuclei-powered scanning for comprehensive security assessment *** ## Best Practices & Next Steps * **Enable Continuous Scanning:** Schedule regular asset discovery and vulnerability scans to ensure your inventory remains current. * **Leverage Labels Effectively:** Develop a consistent labeling scheme that reflects your organizational structure (e.g. by environment, department, or risk level) to prioritize remediation efforts. * **Integrate with Your Workflow:** Set up integrations with alerting systems (Slack, Teams, email) and ticketing tools (Jira, GitHub) to automate notifications and track remediation. * **Review & Update Regularly:** Periodically audit your asset inventory to remove stale entries and adjust labels as your infrastructure evolves. * **Explore Advanced Features:** Once you're comfortable with the basics, dive into additional features such as customized filtering, dynamic grouping, and deeper cloud integrations to further refine your exposure management. *** By following this guide, new users can quickly grasp the full capabilities of ProjectDiscovery's ASM and Exposure Management. 
The integrated workflows—from asset discovery and enrichment to continuous monitoring and vulnerability assessment—provide a robust, real‑time view of your infrastructure, empowering your security team to proactively secure your attack surface. Enjoy the streamlined, automated approach to managing your organization's exposure with ProjectDiscovery! # Asset Screenshots Source: https://docs.projectdiscovery.io/cloud/assets/screenshots Visual catalog of your discovered assets for quick security assessment The Screenshots feature is currently in beta and operates asynchronously. After asset discovery, there may be a delay before screenshots become available as they are processed in the background. This current limitation is temporary while we work on infrastructure optimizations to make screenshot generation instant. We are actively working on: * Reducing screenshot generation time * Implementing real-time processing * Scaling our infrastructure to handle concurrent screenshot requests * Making the feature more widely available to all users During the beta period, you may experience longer wait times for screenshots to appear in your dashboard. We appreciate your patience as we enhance this feature to provide instant visual insights for all users. The *Screenshots* feature automatically captures and catalogs visual snapshots of web assets identified during your discovery process. In practice, this means that for each discovered web service, an image of its web page is saved for you to review. These screenshots provide a quick visual summary of what was found, allowing you to identify interesting or anomalous web pages at a glance. All captured images are organized alongside asset data, so security teams can easily browse them without manually visiting each site. **How this helps:** By seeing the actual rendered pages, you can spot login portals, dashboards, error pages, or other telling visuals immediately. 
This added context enriches your asset inventory beyond raw URLs and metadata, giving you an at-a-glance understanding of each asset's interface and content. ## How It Works (Technical Process) Under the hood, the screenshot feature uses a headless browser to load each web page and take a snapshot of it. When asset discovery with screenshots is initiated, the system will launch a browser engine (Chrome in headless mode) to fully render the target page (including HTML, CSS, and JavaScript) before capturing the image. Because of this rendering step, screenshot generation is **resource-intensive** and **time-consuming**. Each page needs to load as if you opened it in a real browser, which introduces processing delays. In the current beta implementation, screenshots are taken **asynchronously**. This means the initial asset discovery can complete and return results before all screenshots are finished. The images will continue to be captured in the background and will appear in your asset catalog once ready. As a result, you might notice a gap between discovering an asset and seeing its screenshot. This is normal in the beta – the feature prioritizes completing the discovery process first, then works on rendering pages for snapshots. ## Why Use Screenshots? Traditionally, after discovering new web assets, security engineers would **manually inspect** each site to understand what it is. This might involve copying URLs into a browser or using separate tools to capture site images. For large numbers of assets, that manual approach is tedious and time‑consuming. Important details could be missed if an analyst doesn't have time to check every single site. The screenshots feature automates this **visual assessment** step. Instead of manually visiting dozens or hundreds of websites, the system automatically provides you with a gallery of each site's front page. 
This saves considerable time and effort – without automation, teams often had to write custom scripts (for example, using Selenium to take browser snapshots) or even rerun their discovery with a separate screenshot tool just to capture images. Now, that process is integrated: as soon as an asset is found, a screenshot is queued up for it. Security teams can quickly scroll through the captured images to triage assets, prioritize investigation, and spot anything visually unusual or interesting. In essence, **Screenshots turn a once-manual, one-by-one review into an automated, at-scale process**, letting you cover more ground faster. **Use case example:** If your discovery process finds an unknown subdomain hosting a login page, the screenshot will show you the login form and branding. This immediate context might tell you that the site is an admin portal, which is valuable information for risk assessment. Without the screenshot, you might have overlooked that subdomain or delayed investigating it until you could manually check it. By automating this, the feature ensures no discovered web asset goes visually unchecked. # Subsidiary & Multi-Organization Management Source: https://docs.projectdiscovery.io/cloud/assets/subsidiary Discover and manage assets across multiple organizations, subsidiaries, and brands Need advanced workflows or custom subsidiary management? Our team can help set up enterprise-grade configurations tailored to your infrastructure. [Talk to our team](https://projectdiscovery.io/request-demo) to discuss your specific requirements. Modern enterprises frequently have complex infrastructures spread across many domains and business units. ProjectDiscovery's platform is designed to give security teams **instant visibility into the entire organizational attack surface**, including assets belonging to subsidiaries, acquired companies, and separate brands. It does so by automating asset discovery and correlation on a global scale. 
The platform acts as a centralized inventory where all web properties, cloud resources, and external facing systems tied to an organization are cataloged together, regardless of which subsidiary or team they belong to. ProjectDiscovery built its cloud platform with **end-to-end exposure management workflows** that continuously discover assets and monitor them in real-time. This means as your organization grows – launching new websites, spinning up cloud services, or acquiring companies – the platform automatically updates your asset inventory and keeps track of new potential entry points. In short, ProjectDiscovery provides a *"single pane of glass"* for enterprise security teams to oversee multi-organization infrastructures. ## Challenges in Traditional Subsidiary Asset Discovery Tracking assets across multiple organizations or subsidiaries is notoriously difficult when done manually. Security teams traditionally had to compile lists of subsidiary domains and networks from internal knowledge or public records, then run separate scans for each – a time-consuming and error-prone process. Some common challenges include: * **Incomplete Visibility:** Large organizations might have dozens of subsidiaries or brand domains, and each may host numerous applications. Manually mapping all these entities is a huge challenge. In practice, many enterprises have "hundreds or even thousands of related entities," making it *"difficult to get a clear picture of their full attack surface"*. Important assets can be overlooked simply because they were not on the main corporate domain. * **Constant Change:** Mergers, acquisitions, and divestitures mean the set of assets is constantly evolving. Without continuous updates, asset inventories become outdated quickly. IP addresses and domains can change ownership or get spun up and down rapidly in cloud environments. Keeping track of these changes manually is untenable. 
* **Fragmented Data Sources:** Information about subsidiaries is often scattered (e.g. in financial databases, press releases, WHOIS records). As a result, mapping out which domains or systems are owned by your company (versus third parties) can require extensive research. This fragmentation leads to **blind spots** in security monitoring. * **Risk of Unknown Assets:** Perhaps the biggest risk is that **unknown or unmanaged assets can lead to security incidents**. If a security team is only monitoring the primary organization's domains, a forgotten website under a subsidiary could become an easy target. As one security engineer described, without a centralized view "*new assets could pop up without our knowledge, creating potential vulnerabilities like subdomain takeovers*". In other words, attackers might exploit an obscure subsidiary's forgotten cloud bucket or an old acquisition's server if the defenders aren't even aware it exists. These challenges mean that traditional approaches (spreadsheets of subsidiaries, manual scans, etc.) often fail to provide complete coverage. Security teams end up reactive – finding out about a subsidiary's exposure only after an incident or external report. Clearly, a more automated, scalable solution is needed for subsidiary and multi-organization asset management. ## How ProjectDiscovery Solves This Problem ProjectDiscovery's platform introduces automated features that **eliminate the manual legwork** of subsidiary asset discovery. It leverages external data and intelligent correlation to map out an enterprise's entire digital footprint across all related organizations, with minimal user input. Key capabilities include: * **Automated Subsidiary Correlation:** ProjectDiscovery integrates with the Crunchbase API to automatically identify which companies and domains are associated with your organization. 
As soon as you onboard, the platform pulls in known subsidiaries and related entities from Crunchbase's extensive corporate database. This means security teams *immediately* see a list of subsidiaries and their known domains without having to manually research corporate filings or news articles. By using this external intelligence, ProjectDiscovery can **map subsidiaries to assets** and help track associated assets across \[your] entire corporate structure. * **Seamless Onboarding of Subsidiary Assets:** The platform presents this extended view during onboarding – giving users an instant snapshot of their organization's broad footprint as they set up their account. Instead of starting with a blank slate, an enterprise user logging into ProjectDiscovery for the first time might immediately see that the platform has identified, for example, *"SubsidiaryX.com, SubsidiaryY.net, and BrandZ.com"* as belonging to their company. This **jump-starts the asset inventory** by automatically including the web properties of all child organizations. Such visibility, right at onboarding, ensures no major branch of the business is initially overlooked. * **Recognition of Brands and Owned Domains:** Subsidiary discovery in ProjectDiscovery isn't limited to exact company names – it also helps surface related domains or brands. For example, if your organization owns multiple product brands each with their own website, the platform can recognize those as part of your attack surface. It correlates various clues (DNS records, SSL certificates, WHOIS info, etc.) to cluster assets by ownership. As a result, security teams get a unified view of everything "owned" by the broader organization, even if operated under different names. * **Continuous Enrichment and Updates:** ProjectDiscovery's asset correlation is not a one-time static pull. It is continuously being enhanced. 
Upcoming improvements will use **reverse WHOIS lookups** to find additional owned domains and associated entities that might not be obvious from corporate listings. This will further expand coverage by catching assets that share registration details or contact emails with the organization. The platform is also opening up these discovery capabilities via API for the community, so its subsidiary detection engine will keep getting smarter over time. For the security team, this means the asset inventory grows and updates automatically as new information surfaces – without manual effort. By automating subsidiary and multi-organization asset discovery, ProjectDiscovery **saves countless hours** of manual mapping and drastically reduces the chances of missing a part of your attack surface. Security teams no longer need to maintain separate inventories or perform ad-hoc research whenever the company expands; the platform handles it for them in the background. All assets across the parent company and its subsidiaries funnel into one consolidated inventory for monitoring. # Credential Monitoring Source: https://docs.projectdiscovery.io/cloud/credential-monitoring Detect and respond to compromised credentials from dark web sources and infostealer logs **Beta**: Credential Monitoring is currently in beta. We're continuously expanding monitoring methods and adding new features to enhance your security posture. ## What is Credential Monitoring? Compromised credentials are one of the weakest security points and the easiest attack vector for cybercriminals. ProjectDiscovery's Credential Monitoring is a continuous threat intelligence system that detects compromised credentials from **malware stealer logs**, enabling security teams to prevent account takeovers. By continuously scanning millions of exposed credentials, the platform identifies actual credential exposures that pose immediate risk to your organization, employees, and customers. 
We specifically focus on malware stealer logs, as these have proven to be the most impactful source of credential exposure. As we evolve this beta product, we'll be expanding monitoring across GitHub repositories, crawled web pages, and other sources to detect exposed tokens, API keys, and environment secrets.

ProjectDiscovery Credential Monitoring Dashboard

Start monitoring your credentials for free and see exposed credentials in real time.

## Feature access by plan

| Feature | Free Users | Free Business Domain Users | Enterprise Users |
| :-------------------------------------- | ---------- | ----------------------------- | ---------------- |
| Personal email exposures | ✓ | ✓ | ✓ |
| Organization-wide credential exposures | - | ✓ (Requires DNS verification) | ✓ |
| View employee passwords | - | ✓ (Requires DNS verification) | ✓ |
| Export data (JSON/CSV) | ✓ | ✓ | ✓ |
| API access | ✓ | ✓ | ✓ |
| Multi-domain monitoring | - | - | ✓ |
| Priority support | - | - | ✓ |

**Access Control**: Viewing employee passwords is only accessible to users with **Owner** or **Admin** account types within your organization.

## How It Works

Our credential monitoring system:

1. **Collects** malware-stolen credential data from publicly accessible sources, including:
   * Telegram channels and groups where malware logs are shared
   * Leak forums and websites
   * Public repositories where malware logs are posted
2. **Processes** and filters the data to:
   * Parse credential pairs (email:password combinations) from malware logs
   * Extract domain and email information
   * Filter for credentials matching your monitored domains
   * Remove invalidly formatted data
3. **Alerts** your team when credentials matching your monitored domains are found

All credential data comes from **publicly accessible sources** on the internet where malware logs are shared. We do not perform any unauthorized access or hacking to obtain this information.
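The processing step described above — parsing `email:password` pairs and filtering them against monitored domains — can be sketched in a few lines. The log format, regex, and function names here are illustrative assumptions, not the platform's actual pipeline:

```python
import re

# Hypothetical monitored domain, following the document's hooli.com example.
MONITORED_DOMAINS = {"hooli.com"}

# A common stealer-log line format is "email:password"; real logs vary widely.
PAIR_RE = re.compile(r"^(?P<email>[\w.+-]+@[\w.-]+\.[A-Za-z]{2,}):(?P<password>\S+)$")

def parse_and_filter(lines, monitored=MONITORED_DOMAINS):
    """Parse credential pairs and keep only those matching monitored domains."""
    hits = []
    for line in lines:
        m = PAIR_RE.match(line.strip())
        if not m:
            continue  # discard invalidly formatted entries
        email = m.group("email").lower()
        domain = email.split("@", 1)[1]
        if domain in monitored:
            hits.append((email, m.group("password")))
    return hits

raw = [
    "john.doe@hooli.com:hunter2",
    "not a credential line",
    "user@gmail.com:pass123",
]
print(parse_and_filter(raw))  # [('john.doe@hooli.com', 'hunter2')]
```

The gmail.com entry is dropped because only credentials whose email domain matches a monitored domain are kept; malformed lines are discarded during parsing.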
**Important**: ProjectDiscovery does not validate, test, or attempt to login with any collected credential information. We only collect and filter the data for formatting validity - we do not verify if credentials are active or functional. ## Leak Classification and Mapping ProjectDiscovery's Credential Monitoring categorizes discovered credentials into three distinct types based on their relationship to your organization. Understanding these categories helps prioritize remediation efforts and assess security impact across different stakeholder groups. Credential Monitoring Data Classification ### Visual Data Flow The following diagram illustrates how credential data flows through our classification system: ```mermaid flowchart TD A["🔍 Malware Log Data
Collection"] --> B["📊 Data Processing
& Filtering"] B --> C{{"🏷️ Leak Classification
Engine"}} C --> D["👤 My Leaks

• Credentials associated with
your logged-in email
• Personal account exposures
• Direct user impact"] C --> E["👥 Employee Leaks

• Login emails containing
Hooli domain
• Company email addresses
• Internal & external services
• Workforce security impact"] C --> F["🏢 Customer/User Leaks

• Login URLs containing
Hooli domain
• External email addresses
• Customer account exposures
• External customer impact

⚠️ Excludes employee emails"] D --> G["📧 Email Notifications
Immediate alerts"] E --> H["🚨 Security Dashboard
Organization view"] F --> I["📊 Customer Risk
Assessment"] style D fill:#e1f5fe,stroke:#333,stroke-width:2px,color:#000 style E fill:#f3e5f5,stroke:#333,stroke-width:2px,color:#000 style F fill:#fff3e0,stroke:#333,stroke-width:2px,color:#000 style C fill:#f1f8e9,stroke:#333,stroke-width:2px,color:#000 style A fill:#ffffff,stroke:#333,stroke-width:2px,color:#000 style B fill:#ffffff,stroke:#333,stroke-width:2px,color:#000 style G fill:#e8f5e8,stroke:#333,stroke-width:2px,color:#000 style H fill:#ffeaa7,stroke:#333,stroke-width:2px,color:#000 style I fill:#fab1a0,stroke:#333,stroke-width:2px,color:#000 linkStyle default stroke:#333,stroke-width:2px linkStyle 0 stroke:#333,stroke-width:2px linkStyle 1 stroke:#333,stroke-width:2px linkStyle 2 stroke:#333,stroke-width:2px linkStyle 3 stroke:#333,stroke-width:2px linkStyle 4 stroke:#333,stroke-width:2px linkStyle 5 stroke:#333,stroke-width:2px linkStyle 6 stroke:#333,stroke-width:2px linkStyle 7 stroke:#333,stroke-width:2px ``` ### Leak Categories Explained #### 👤 My Leaks **Personal Account Exposures** * **Definition**: All credential exposures directly associated with your logged-in email address in the ProjectDiscovery platform * **Scope**: Personal accounts and services where you used your email for registration * **Impact**: Direct personal security risk requiring immediate attention * **Example**: If you're logged in as `admin@hooli.com`, this shows all malware logs containing `admin@hooli.com` credentials * **Access**: Available to all user tiers without additional verification #### 👥 Employee Leaks **Organizational Workforce Exposures** * **Definition**: All credential exposures where the **login email** contains your organization's domain, regardless of the service/platform where it was used * **Scope**: Current and former employees using company email addresses on ANY platform or service * **Impact**: Internal security risk affecting both organizational assets and external vendor access * **Examples**: * **Internal Company Services**: * 
`john.doe@hooli.com` → `mail.hooli.com` (company email)
  * `sarah.smith@hooli.com` → `intranet.hooli.com` (internal systems)
* **External/3rd Party Services**:
  * `john.doe@hooli.com` → `github.com` (code repositories)
  * `sarah.smith@hooli.com` → `aws.amazon.com` (cloud services)
  * `support@hooli.com` → `slack.com` (communication tools)
  * `admin@hooli.com` → `dropbox.com` (file sharing)
* **Access**: Requires domain verification for Business Domain Users; automatically available for Enterprise users
* **Privacy**: Only visible to Owner and Admin account types

#### 🏢 Customer/User Leaks

**External Customer Exposures**

* **Definition**: All credential exposures where the **login URL/domain** contains your company domain, but the email address does NOT belong to employees
* **Scope**: Your customers and users who have accounts on your services or platforms
* **Impact**: External customer security risk affecting user trust and platform security
* **Examples**:
  * `user123@gmail.com` with login URL containing `hooli.com`
  * `customer@yahoo.com` accessing services at `app.hooli.com`
  * `buyer@outlook.com` with stored passwords for `shop.hooli.com`
* **Exclusions**: Does not include employee emails (those are classified as Employee Leaks)
* **Access**: Available to verified Business Domain Users and Enterprise customers
* **Privacy**: Email addresses shown, but passwords are never displayed to protect customer privacy

### Key Classification Distinction

**Critical Understanding**: The fundamental difference between Employee and Customer leaks:

* **👥 Employee Leaks**: Determined by the **EMAIL ADDRESS** - any leak where the email contains your company domain, regardless of what service it was used on
  * `john@hooli.com` used on GitHub ✓ Employee Leak
  * `sarah@hooli.com` used on AWS ✓ Employee Leak
  * `admin@hooli.com` used on Dropbox ✓ Employee Leak
* **🏢 Customer Leaks**: Determined by the **SERVICE/LOGIN URL** - any leak where external emails were used on your company's services
  * `user@gmail.com` used on `app.hooli.com` ✓ Customer Leak
  * `customer@yahoo.com` used on `shop.hooli.com` ✓ Customer Leak

### Priority Matrix for Remediation

| Leak Type | Priority | Actions Required | Notifications |
| ------------------ | ----------- | ----------------------------------------------------------------- | ------------------------ |
| **My Leaks** | Critical | Immediate password reset, enable MFA | Real-time email alerts |
| **Employee Leaks** | High | Force password resets, audit 3rd party access, security training | Dashboard alerts + email |
| **Customer Leaks** | Medium-High | Customer notification, password reset prompts | Dashboard alerts + email |

**Pro Tip**: Use the leak classification to implement different response workflows. Personal and employee leaks require immediate internal action (including auditing 3rd party service access), while customer leaks may need customer communication and platform-level security enhancements.

### Data Accuracy and Classification Logic

Our classification system uses advanced pattern matching and domain analysis to ensure accurate categorization:

* **Email Domain Matching**: Sophisticated regex patterns identify company domains in email addresses
* **URL Domain Extraction**: Advanced parsing extracts target domains from login URLs and service endpoints
* **Duplicate Prevention**: Cross-category filtering ensures employee emails don't appear in customer leak categories
* **False Positive Reduction**: Multiple validation layers minimize misclassification

**Important**: Customer leak data shows email addresses for identification purposes but never displays actual passwords to maintain customer privacy and comply with data protection standards.
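The classification rules above can be condensed into a small decision function. This is a simplified sketch using the document's `hooli.com` example; the function name, category labels, and ordering are illustrative assumptions, not the platform's actual implementation:

```python
from urllib.parse import urlparse

ORG_DOMAIN = "hooli.com"          # your verified organization domain
MY_EMAIL = "admin@hooli.com"      # the logged-in user's email (example)

def classify(email: str, login_url: str) -> str:
    """Classify a leaked credential as my/employee/customer leak."""
    email = email.lower()
    email_domain = email.split("@", 1)[1]
    url_host = urlparse(login_url).hostname or ""
    if email == MY_EMAIL:
        return "my_leak"
    # Employee leaks are determined by the EMAIL ADDRESS domain.
    if email_domain == ORG_DOMAIN or email_domain.endswith("." + ORG_DOMAIN):
        return "employee_leak"
    # Customer leaks are determined by the LOGIN URL host.
    if url_host == ORG_DOMAIN or url_host.endswith("." + ORG_DOMAIN):
        return "customer_leak"
    return "unrelated"

print(classify("john@hooli.com", "https://github.com/login"))     # employee_leak
print(classify("user@gmail.com", "https://app.hooli.com/login"))  # customer_leak
print(classify("admin@hooli.com", "https://mail.hooli.com"))      # my_leak
```

Note how the employee check runs before the customer check, which mirrors the stated exclusion: an employee email used on a company service is an Employee Leak, never a Customer Leak.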
## Understanding Malware-Based Credential Theft

### How Malware Steals Credentials

Malware (information stealers) typically harvests credentials from:

* **Browser saved passwords** - Chrome, Firefox, Edge, Safari stored passwords
* **Application credentials** - Email clients, FTP clients, messaging apps
* **System credential stores** - Windows Credential Manager, macOS Keychain
* **Browser cookies and sessions** - Active login sessions
* **Cryptocurrency wallets** - Wallet files and recovery phrases
* **SSH/RDP credentials** - Stored connection credentials

### Malware Log Structure

When malware infects a system, it creates "logs" containing stolen data that may include:

* Victim's system information (OS, location, etc.)
* Stolen passwords organized by application/browser
* Cookies and session tokens
* Cryptocurrency wallet data
* Screenshots and system files

These logs are then shared or sold on underground platforms, which is where we collect them from publicly accessible sources.

## Why Some Findings Lack Detailed Metadata

**Important**: Not all credential exposures include complete metadata such as specific malware names, infection dates, or victim details.
This happens because:

* **Data Processing**: Threat actors often strip identifying information before sharing logs
* **Source Aggregation**: Logs may pass through multiple hands before becoming publicly available
* **Privacy Protection**: Some sources anonymize victim information
* **Technical Limitations**: Malware logs don't always contain complete metadata

### Common Metadata Available

When present, malware logs may include:

* **Collection date** - When the malware harvested the credentials
* **Geographic location** - Country/region of infected system
* **System information** - OS version, browser versions
* **Malware family** - Type of stealer malware used (when identifiable)

### When Metadata is Missing or "Blank"

If findings show blank or missing source information:

* **The credentials are still valid threats** - treat them seriously
* **Source anonymization** - Information may have been stripped for privacy
* **Multiple aggregation** - Logs may have passed through several sources
* **Technical parsing issues** - Some log formats don't parse completely

## What Actions Should You Take?

When malware-exposed credentials are identified for your domain:

### Recommended Actions

1. **Force password resets** for all affected email addresses
2. **Enable multi-factor authentication** (MFA) on all affected accounts
3. **Disable compromised accounts** temporarily and review recent activity
4. **Rotate associated API keys** and service account passwords
5. **Scan endpoints** for malware infections
6. **Deploy endpoint protection** and implement password managers
7. **Conduct security training** to prevent future credential theft

### Handling Cases with Missing Source Details

When leak sources are blank or incomplete:

* **Prioritize these equally** - assume they represent active threats
* **Focus on remediation** rather than source investigation
* **Monitor affected accounts closely** for suspicious activity
* **Treat as confirmed malware exposure** and follow full remediation steps

## API Integration

Access credential monitoring data programmatically:

* **Domain Leaks**: `GET /v1/leaks/domain` - Get all malware-exposed credentials for your monitored domains
* **Email Leaks**: `GET /v1/leaks/email` - Get credential exposures for specific email addresses
* **Customer Leaks**: `GET /v1/leaks/domain/customers` - Get customer email addresses (returns only email addresses of customers, not full credential exposures)

For detailed API documentation and usage examples, see the [API Reference](/api-reference/leaks/get-domain-leaks). Integrate these API endpoints with your security tools to automatically trigger password resets and security reviews when new malware-based exposures are detected.

## Common User Questions

**Q: Can all team members see our organization's exposed credentials?**
A: No, credential visibility is restricted to Owner and Admin accounts only. This ensures sensitive breach data is only accessible to authorized personnel responsible for security.

**Q: Why can't I see employee passwords?**
A: Business Domain Users can view employee password data after DNS verification. Only Enterprise customers can monitor multiple domains. Customer passwords are never displayed to any user tier to protect customer privacy.

**Q: How do I view my personal leaked credentials?**
A: You can automatically view all leaked credentials associated with the email address you used to sign up. Simply navigate to "My Leaks" to see any exposures linked to your personal email account - no additional verification required.
**Q: What breach notifications will I receive?** A: All users receive email notifications when new breaches affecting their personal email are discovered. Business Domain Users (after DNS verification) and Enterprise customers also receive notifications for employee credential exposures across their organization. **Q: How do I monitor my company's domain?** A: Business domain users must verify their domain via DNS TXT record to access organization-wide breach data. Without verification, you'll only see personal email exposures. Enterprise users can verify and monitor multiple domains. **Q: Are you monitoring our employees?** A: No, we monitor internet-wide credential leaks and aggregate them into a comprehensive database. When you verify your domain, we filter this existing database to show only exposures related to your organization. We're not actively monitoring your employees - we're helping you discover credentials that are already publicly exposed across the internet. **Q: How accurate is the breach data?** A: We aggregate data from multiple verified sources. While false positives are possible, always verify before taking action. **Q: How current is the data?** A: We continuously monitor sources 24/7. New breaches typically appear within hours to days of being discovered. **Q: How did these credentials get stolen?** A: The credentials were harvested by malware (information stealers) that infected user devices. These malware strains steal saved passwords from browsers and applications, then upload the data to command-and-control servers where it eventually gets shared publicly. **Q: Are these from phishing attacks or data breaches?** A: No. Our credential monitoring specifically focuses on **malware-based credential theft only**. We do not track credentials from phishing campaigns or corporate data breaches. **Q: Why don't all findings show malware names?** A: Malware logs don't always contain identifying information about the specific malware strain. 
Additionally, logs are often processed and anonymized before being shared publicly, which strips technical details.

**Q: Why are some metadata fields showing as blank?**
A: This indicates that metadata fields (such as malware family name, collection date, geographic location, or system information) were not available in the original malware log or were stripped during processing. The credentials are still legitimate exposures that require immediate action.

**Q: Do you test if these credentials actually work?**
A: No. ProjectDiscovery does not validate, test, or attempt to log in with any collected credential information. We only collect and present the data as found in publicly accessible malware logs.

# AI Assistance

Source: https://docs.projectdiscovery.io/cloud/editor/ai

Review details on using AI to help generate templates for Nuclei and ProjectDiscovery

AI Prompt

[The Template Editor](https://cloud.projectdiscovery.io/) includes AI assistance for generating templates from vulnerability reports. This document guides you through the process, offering usage tips and examples.

## Overview

Powered by ProjectDiscovery's deep library of public Nuclei templates and a rich CVE data set, the AI understands a broad array of security vulnerabilities. First, the system interprets the user's prompt to identify a specific vulnerability. Then, it generates a template based on the steps required to reproduce the vulnerability, along with all the metadata necessary to reproduce and remediate it.

## Initial Setup

Kick-start your AI Assistance experience with these steps:

1. **Provide Detailed Information**: Construct comprehensive Proof of Concepts (PoCs) for vulnerabilities like Cross-Site Scripting (XSS), and others.
2. **Understand the Template Format**: Get to grips with the format to appropriately handle and modify the generated template.
3. **Validation and Linting**: Use the integrated linter to guarantee the template's validity.
4.
**Test the Template**: Evaluate the template against a test target ensuring its accuracy. ## Best Practices * **Precision Matters**: Detailed prompts yield superior templates. * **Review and Validate**: Consistently check matchers' accuracy. * **Template Verification**: Validate the template on known vulnerable targets before deployment. ## Example Prompts The following examples demonstrate different vulnerabilities and the corresponding Prompt. Open redirect vulnerability identified in a web application. Here's the PoC: HTTP Request: ``` GET /redirect?url=http://malicious.com HTTP/1.1 Host: example.com User-Agent: Mozilla/5.0 ``` HTTP Response: ``` HTTP/1.1 302 Found Location: http://malicious.com Content-Length: 0 Server: Apache ``` The application redirects the user to the URL specified in the url parameter, leading to an open redirect vulnerability. SQL Injection vulnerability in a login form. Here's the PoC: HTTP Request: ``` POST /login HTTP/1.1 Host: example.com User-Agent: Mozilla/5.0 Content-Type: application/x-www-form-urlencoded username=admin&password=' OR '1'='1 ``` HTTP Response: ``` HTTP/1.1 200 OK Content-Type: text/html Content-Length: 1337 Server: Apache ...

Welcome back, admin

... ``` The application improperly handles user input in the password field, leading to an SQL Injection vulnerability.
Business Logic vulnerability in a web application's shopping cart function allows for negative quantities, leading to credit. Here's the PoC: HTTP Request: ``` POST /add-to-cart HTTP/1.1 Host: example.com User-Agent: Mozilla/5.0 Content-Type: application/x-www-form-urlencoded product_id=1001&quantity=-1 ``` HTTP Response: ``` HTTP/1.1 200 OK Content-Type: text/html Content-Length: 1337 Server: Apache ...

Product added to cart. Current balance: -$19.99

... ``` The application fails to validate the quantity parameter, resulting in a Business Logic vulnerability.
Server-side Template Injection (SSTI) vulnerability through a web application's custom greeting card function. Here's the PoC: ``` HTTP Request: POST /create-card HTTP/1.1 Host: example.com User-Agent: Mozilla/5.0 Content-Type: application/x-www-form-urlencoded message={{7*7}} ``` ``` HTTP Response: HTTP/1.1 200 OK Content-Type: text/html Content-Length: 1337 Server: Apache ...

Your card: 49

... ``` The application processes the message parameter as a template, leading to an SSTI vulnerability.
Insecure Direct Object Reference (IDOR) vulnerability discovered in a website's user profile page. Here's the PoC: ``` HTTP Request: GET /profile?id=2 HTTP/1.1 Host: example.com User-Agent: Mozilla/5.0 Cookie: session=abcd1234 ``` ``` HTTP Response: HTTP/1.1 200 OK Content-Type: text/html Content-Length: 1337 Server: Apache ...

Welcome, otheruser

... ``` The application exposes sensitive information of a user (ID: 2) who is not the authenticated user (session: abcd1234), leading to an IDOR vulnerability.
Path Traversal vulnerability identified in a web application's file download function. Here's the PoC: ``` HTTP Request: GET /download?file=../../etc/passwd HTTP/1.1 Host: example.com User-Agent: Mozilla/5.0 ``` ``` HTTP Response: HTTP/1.1 200 OK Content-Type: text/plain Content-Length: 1827 Server: Apache root:x:0:0:root:/root:/bin/bash ``` The application fetches the file specified in the file parameter from the server file system, leading to a Path Traversal vulnerability. Business logic vulnerability in a web application's VIP subscription function allows users to extend the trial period indefinitely. Here's the PoC: ``` HTTP Request: POST /extend-trial HTTP/1.1 Host: example.com User-Agent: Mozilla/5.0 Cookie: session=abcd1234 ``` ``` HTTP Response: HTTP/1.1 200 OK Content-Type: text/html Content-Length: 1337 Server: Apache

Your VIP trial period has been extended by 7 days.

``` The application does not limit the number of times the trial period can be extended, leading to a business logic vulnerability.
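For reference, a prompt like the open-redirect example above would typically yield a template along these lines. This is an illustrative hand-written sketch of the Nuclei template format, not actual editor output; the `id`, metadata, and matcher details are assumptions you would adjust for your target:

```yaml
id: open-redirect-url-param

info:
  name: Open Redirect via url Parameter (illustrative example)
  author: your-name
  severity: medium
  description: The application redirects to the attacker-controlled value of the url parameter.

http:
  - method: GET
    path:
      - "{{BaseURL}}/redirect?url=http://malicious.com"

    matchers-condition: and
    matchers:
      - type: status
        status:
          - 302
      - type: regex
        part: header
        regex:
          - "(?i)location: http://malicious\\.com"
```

The two matchers mirror the PoC: a `302` status combined with a `Location` header pointing at the injected destination, so the template only fires when the redirect is actually attacker-controlled.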
Each of these examples provides HTTP Requests and Responses to illustrate the vulnerabilities. ## Limitations Please note that the current AI is trained primarily on HTTP data. Template generation for non-HTTP protocols is not supported at this time. Support for additional protocols is under development and will be available soon. # Templates & Editor FAQ Source: https://docs.projectdiscovery.io/cloud/editor/faq Answers to common questions about Nuclei templates and our cloud platform template editor Nuclei [templates](http://github.com/projectdiscovery/nuclei-templates) are the core of the Nuclei project and ProjectDiscovery Cloud Platform. The templates contain the actual logic that is executed in order to detect various vulnerabilities. The ProjectDiscovery template library contains **several thousand** ready-to-use **[community-contributed](https://github.com/projectdiscovery/nuclei-templates/graphs/contributors)** vulnerability templates. We are continuously working with our open source community to update and add templates as vulnerabilities are discovered. We maintain a [template guide](/templates/introduction/) for writing new and custom Nuclei templates. ProjectDiscovery Cloud Platform also provides AI support to assist in writing and testing custom templates. - Check out our documentation on the [Templates Editor](/cloud/editor/ai) for more information. Performing security assessment of an application is time-consuming. It's always better and time-saving to automate steps whenever possible. Once you've found a security vulnerability, you can prepare a Nuclei template by defining the required HTTP request to reproduce the issue, and test the same vulnerability across multiple hosts with ease. It's worth mentioning ==you write the template once and use it forever==, as you don't need to manually test that specific vulnerability any longer. 
Here are a few examples from the community using templates to automate security findings:

* [https://dhiyaneshgeek.github.io/web/security/2021/02/19/exploiting-out-of-band-xxe/](https://dhiyaneshgeek.github.io/web/security/2021/02/19/exploiting-out-of-band-xxe/)
* [https://blog.melbadry9.xyz/fuzzing/nuclei-cache-poisoning](https://blog.melbadry9.xyz/fuzzing/nuclei-cache-poisoning)
* [https://blog.melbadry9.xyz/dangling-dns/xyz-services/ddns-worksites](https://blog.melbadry9.xyz/dangling-dns/xyz-services/ddns-worksites)
* [https://blog.melbadry9.xyz/dangling-dns/aws/ddns-ec2-current-state](https://blog.melbadry9.xyz/dangling-dns/aws/ddns-ec2-current-state)
* [https://projectdiscovery.io/blog/if-youre-not-writing-custom-nuclei-templates-youre-missing-out](https://projectdiscovery.io/blog/if-youre-not-writing-custom-nuclei-templates-youre-missing-out)
* [https://projectdiscovery.io/blog/the-power-of-nuclei-templates-a-universal-language-of-vulnerabilities](https://projectdiscovery.io/blog/the-power-of-nuclei-templates-a-universal-language-of-vulnerabilities)

Nuclei templates are selected as part of any scans you create. You can select pre-configured groups of templates, individual templates, or add your own custom templates as part of your scan configuration.

* Check out [the scanning documentation](/cloud/scanning/overview) to learn more.

You are always welcome to share your templates with the community. You can either open a [GitHub issue](https://github.com/projectdiscovery/nuclei-templates/issues/new?assignees=\&labels=nuclei-template\&template=submit-template.md\&title=%5Bnuclei-template%5D+template-name) with the template details or open a GitHub [pull request](https://github.com/projectdiscovery/nuclei-templates/pulls) with your Nuclei templates. If you don't have a GitHub account, you can also use the [Discord server](https://discord.gg/projectdiscovery) to share the template with us.
You own any templates generated by the AI through the Template Editor. They are your property, and you are granted a perpetual license to use and modify them as you see fit.

The Template Editor feature in PDCP uses OpenAI.

Yes, prompts are stored as part of the generated template metadata. This data is deleted as soon as the template or the user is deleted.

The accuracy of the generated templates depends primarily on the detail and specificity of the input you provide. The more detailed the information you supply, the better the AI can understand the context and create an accurate template. However, as with any AI tool, it is highly recommended to review, validate, and test any generated templates before using them in a live environment.

No, the AI does not use the templates you generate for further training or improvement of the AI model. The system only uses public templates and CVE data for training, ensuring your unique templates remain confidential.

# Template Editor Overview

Source: https://docs.projectdiscovery.io/cloud/editor/overview

Learn more about using the Nuclei Templates Editor

For more in-depth information about Nuclei templates, including details on template structure and supported protocols, [check out the templates documentation](/templates/introduction).

[The Template Editor](https://cloud.projectdiscovery.io/public/public-template) is a multi-functional cloud-hosted tool designed for creating, running, and sharing templates (Nuclei and ProjectDiscovery). It's packed with helpful features for individual and professional users seeking to manage and execute templates.

*Templates Editor*

## Template Compatibility

In addition to the Template Editor, our cloud platform supports any templates compatible with [Nuclei](/nuclei/overview). These templates use exactly the same powerful YAML format supported in open source.
Take a look at our [Templates](/templates/introduction) documentation for a wealth of resources around template design, structure, and how templates can be customized to meet an enormous range of use cases. As always, if you have questions, [we're here to help](/help/home).

## Features

Current and upcoming features:

| Feature                    | Description and Use                                                                                                                                                   | Availability |
| -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ |
| **Editor**                 | Experience something akin to VS Code with our integrated editor, built on top of Monaco. This feature allows easy writing and modification of Nuclei templates.       | Free         |
| **Optimizer**              | Leverage the built-in TemplateMan API to automatically lint, format, validate, and enhance your Nuclei templates.                                                     | Free         |
| **Scan (URL)**             | Run your templates against a target URL to check their validity.                                                                                                      | Free \*      |
| **Debugger**               | Use the built-in debugging function that displays the requests and responses of your template scans, aiding troubleshooting and understanding template behavior.      | Free         |
| **Cloud Storage**          | Store and access your Nuclei templates securely anytime, anywhere using your account.                                                                                 | Free         |
| **Sharing**                | Share your templates for better collaboration by generating untraceable unique links.                                                                                 | Free         |
| **AI Assistance**          | Employ AI to craft Nuclei templates based on the context of specified vulnerabilities. This feature simplifies template creation and minimizes the time required.     | Free \*      |
| **Scan (LIST, CIDR, ASN)** | In the professional version, run scans on target lists, network ranges (CIDR), and AS numbers (ASN).                                                                  | Teams        |
| **REST API**               | In the professional version, fetch templates, call the AI, and perform scans remotely using APIs.                                                                     | Teams        |
| **PDCP Sync**              | In the professional version, sync your generated templates with our cloud platform for easy access and management.                                                    | Teams        |

## Free Feature Limitations

Some features available within the free tier have usage caps in place:

* **Scan (URL):** You're allowed up to **100** scans daily.
* **AI Assistance:** Up to **10** queries can be made each day.

These limits reset daily and ensure system integrity and availability while providing access to key functions.

## How to Get Started

Begin by ensuring you have an account. If not, sign up at [https://cloud.projectdiscovery.io](https://cloud.projectdiscovery.io/sign-up) and follow the steps below:

1. Log in to your account at [https://cloud.projectdiscovery.io](https://cloud.projectdiscovery.io).
2. Click the "**Create new template**" button to open a fresh editor.
3. Write and modify your template. The editor includes tools like syntax highlighting, snippet suggestions, and other features to simplify the process.
4. After writing your template, enter your testing target and click the "**Scan**" button to validate your template's accuracy.
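As a sketch of the REST API access mentioned in the feature table, the snippet below prepares (but does not send) a request against the documented `GET /v1/scans/config` endpoint. The base URL and the `X-API-Key` header name are assumptions for illustration; consult the API reference for the authoritative values.

```python
# Sketch only: builds a request for the documented /v1/scans/config endpoint.
# BASE_URL and the "X-API-Key" header name are assumptions, not confirmed values.
import urllib.request

BASE_URL = "https://api.projectdiscovery.io"

def build_configs_request(api_key: str) -> urllib.request.Request:
    """Prepare a GET request to list scan configurations."""
    return urllib.request.Request(
        f"{BASE_URL}/v1/scans/config",
        headers={"X-API-Key": api_key},
        method="GET",
    )

req = build_configs_request("your-api-key")
print(req.full_url)  # https://api.projectdiscovery.io/v1/scans/config
```

To actually send the request you would pass `req` to `urllib.request.urlopen` (or use any HTTP client) with a valid API key from your account settings.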