Your saved dynamic groups will appear in the same [list as your assets](https://cloud.projectdiscovery.io/assets). From here you can:
* **Access groups:** Click any group name to instantly view that filtered subset
* **Edit groups:** Hover over a group name and click the edit icon to modify the filters
* **Delete groups:** Remove groups that are no longer needed via the group settings menu
* **Share groups:** Generate shareable links for specific groups (Enterprise plan only)
## Use Cases and Best Practices
Dynamic asset groups offer versatile applications across your organization's security workflows. Create team-specific views (DevOps focusing on cloud technologies, Security teams monitoring vulnerabilities, Compliance teams tracking regulated systems), environment-specific groups (production, development, third-party integrations), and security-priority filters (critical infrastructure, public-facing systems, legacy technologies).

For optimal results, maintain descriptive naming conventions, document each group's purpose, review and update groups as infrastructure evolves, limit the number of groups to maintain focus, and combine groups with custom labels for more powerful filtering. This approach streamlines asset management while providing targeted visibility where it matters most.
## Limitations
* Dynamic groups cannot be targeted for independent rescans
* The results in a dynamic group will always reflect the most recent state of the parent discovery
* Filter conditions apply only to discovered attributes; custom data cannot be used for filtering
The [Template Editor](https://cloud.projectdiscovery.io/) includes AI assistance for generating templates from vulnerability reports. This document guides you through the process, offering usage tips and examples.
## Overview
Powered by ProjectDiscovery's deep library of public Nuclei templates and a rich CVE data set, the AI understands a broad array of security vulnerabilities. First, the system interprets the user's prompt to identify a specific vulnerability. It then generates a template with the steps required to reproduce the vulnerability, along with the meta information needed to verify and remediate it.
## Initial Setup
Kick-start your AI assistance experience with these steps:
1. **Provide Detailed Information**: Supply comprehensive Proof-of-Concept (PoC) details for vulnerabilities such as Cross-Site Scripting (XSS).
2. **Understand the Template Format**: Get to grips with the format so you can review and modify the generated template.
3. **Validate and Lint**: Use the integrated linter to guarantee the template's validity.
4. **Test the Template**: Evaluate the template against a test target to confirm its accuracy.
## Best Practices
* **Precision Matters**: Detailed prompts yield superior templates.
* **Review and Validate**: Consistently check matchers' accuracy.
* **Template Verification**: Validate the template on known vulnerable targets before deployment.
## Example Prompts
The following examples pair a telltale response line from a proof of concept with the vulnerability it indicates:
* `Welcome back, admin`: The application improperly handles user input in the password field, leading to a SQL Injection vulnerability.
* `Product added to cart. Current balance: -$19.99`: The application fails to validate the quantity parameter, resulting in a Business Logic vulnerability.
* `Your card: 49`: The application processes the message parameter as a template, leading to an SSTI vulnerability.
* `Welcome, otheruser`: The application exposes sensitive information of a user (ID: 2) who is not the authenticated user (session: abcd1234), leading to an IDOR vulnerability.
* `Your VIP trial period has been extended by 7 days.`: The application does not limit the number of times the trial period can be extended, leading to a Business Logic vulnerability.
## Template Compatibility
In addition to the Template Editor, our cloud platform supports any templates compatible with [Nuclei](nuclei/overview). These templates use the same powerful YAML format supported by the open-source project.
Take a look at our [Templates](/Templates/introduction) documentation for a wealth of resources available around template design, structure, and how they can be customized to meet an enormous range of use cases. As always, if you have questions [we're here to help](/help/home).
## Features
Current and upcoming features:
| Feature | Description and Use | Availability |
| -------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------ |
| **Editor** | Experience something akin to using VS Code with our integrated editor, built on top of Monaco. This feature allows easy writing and modification of Nuclei Templates. | Free |
| **Optimizer** | Leverage the in-built TemplateMan API to automatically lint, format, validate, and enhance your Nuclei Templates. | Free |
| **Scan (URL)** | Run your templates on a targeted URL to check their validity. | Free \* |
| **Debugger** | Utilize the in-built debugging function that displays requests and responses of your template scans, aiding troubleshooting and understanding template behavior. | Free |
| **Cloud Storage** | Store and access your Nuclei Templates securely anytime, anywhere using your account. | Free |
| **Sharing**                | Share your templates for better collaboration by generating unique, unguessable links.                                                                                                         | Free         |
| **AI Assistance**          | Employ AI to craft Nuclei Templates from the context of a specified vulnerability, simplifying and speeding up template creation.                                                               | Free \*      |
| **Scan (LIST, CIDR, ASN)** | In the professional version, run scans on target lists, network ranges (CIDR), AS numbers (ASN). | Teams |
| **REST API** | In the professional version, fetch templates, call the AI, and perform scans remotely using APIs. | Teams |
| **PDCP Sync** | Sync your generated templates with our cloud platform for easy access and management, available in the professional version. | Teams |
## Free Feature Limitations
Some features available within the free tier have usage caps in place:
* **Scan (URL):** You're allowed up to **100** scans daily.
* **AI Assistance:** Up to **10** queries can be made each day.
These limits reset daily and ensure system integrity and availability while providing access to key functions.
## How to Get Started
Begin by ensuring you have an account. If not, sign up on [https://cloud.projectdiscovery.io](https://cloud.projectdiscovery.io/sign-up) and follow the steps below:
1. Log in to your account at [https://cloud.projectdiscovery.io](https://cloud.projectdiscovery.io).
2. Click on the "**Create new template**" button to open up a fresh editor.
3. Write and modify your template. The editor includes tools like syntax highlighting, snippet suggestions, and other features to simplify the process.
4. After writing your template, input your testing target and click the "**Scan**" button to verify your template's accuracy.
# Recommended
Source: https://docs.projectdiscovery.io/cloud/editor/recommended
Learn more about using recommended templates with ProjectDiscovery
To share a template, click on the "Share" button to generate a link that can be sent to others.
## How to Share Public Templates
Public templates are designed for ease of sharing. You don't need to be authenticated to share them, meaning there's no need to log in. These templates are mapped with their Template ID, following a static URL pattern. For instance, a public template URL might resemble this: [https://cloud.projectdiscovery.io/public/CVE-2023-35078](https://cloud.projectdiscovery.io/public/CVE-2023-35078). In the given URL, `CVE-2023-35078` is the Template ID representing the template in the [nuclei-templates](https://github.com/projectdiscovery/nuclei-templates) project.
## How to Share User Templates
User templates, unlike public templates, require authentication for sharing. These templates are assigned a unique, UUID-based ID, much like YouTube's unlisted video URLs. Anyone with the shared URL will be able to access the template.
## Revoking Access to Shared Templates
If at any point you want to revoke access to a shared template, simply change the template's visibility to private. After this change, the originally shared link becomes inactive. You can share the template again later, which generates a new unique ID.
Please remember, while sharing is easy, it's important to distribute the URL cautiously as the link allows full access to the shared template.
# Editor Keyboard Shortcuts
Source: https://docs.projectdiscovery.io/cloud/editor/shortcuts
Review keyboard shortcuts for Nuclei templates
The Template Editor is equipped with keyboard shortcuts to make it more efficient. You can use these shortcuts whether you're creating a new template or optimizing an existing one, enabling quicker actions without interfering with your workflow.
Here is a list of the actions, along with their corresponding shortcut keys and descriptions:
| **Action** | **Shortcut Key** | **Description** |
| --------------------- | ----------------------- | ------------------------------------------------------------ |
| Save Template | **CMD + S** | Saves the current template. |
| Duplicate Template | **CMD + D** | Creates a copy of a public template. |
| Execute Template      | **CMD + SHIFT + SPACE** | Runs a scan with the current template.                       |
| Share Template Link | **ALT + SHIFT + SPACE** | Generates a URL for sharing the current template. |
| Search Templates | **CMD + K** | Searches within your own templates. |
| Copy Template | **CMD + SHIFT + C** | Copies the selected template to your clipboard. |
| Show/Hide Side Bar | **CMD + B** | Toggles the visibility of the side bar. |
| Show/Hide Debug Panel | **CMD + SHIFT + M** | Toggles the visibility of the debug panel for extra insight. |
### Slack
ProjectDiscovery supports scan notifications through Slack. To enable Slack notifications, provide a name for your Configuration, a webhook, and an optional username.
Choose from the list of **Events** (Scan Started, Scan Finished, Scan Failed) to specify which notifications are generated. All Events are selected by default.
* Refer to Slack's [documentation on creating webhooks](https://api.slack.com/messaging/webhooks) for configuration details.
### MS Teams
ProjectDiscovery supports notifications through Microsoft Teams. To enable notifications, provide a name for your Configuration and a corresponding webhook.
Choose from the list of **Events** (Scan Started, Scan Finished, Scan Failed) to specify what notifications are generated.
* Refer to [Microsoft's documentation on creating webhooks](https://learn.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook?tabs=newteams%2Cdotnet) for configuration details.
### Email
ProjectDiscovery supports notifications via Email. To enable email notifications for completed scans, simply add your recipient email addresses.
### Webhook
ProjectDiscovery supports custom webhook notifications, allowing you to post events to any HTTP endpoint that matches your infrastructure requirements.
To implement webhook notifications, provide:
* Configuration name
* Webhook URL
* Authentication parameters (if required)
Example endpoint format:
```
https://your-domain.com/api/security/alerts
```
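As a sketch of what consuming such an endpoint might look like, the hypothetical handler below parses an incoming alert body. The `name` and `severity` fields are illustrative assumptions, not a documented event schema; adjust them to match the events you actually receive.

```python
import json

# Hypothetical webhook handler: the payload fields ("name", "severity")
# are illustrative assumptions, not a documented ProjectDiscovery schema.
def handle_alert(raw_body: str) -> str:
    event = json.loads(raw_body)
    severity = event.get("severity", "unknown")
    name = event.get("name", "unnamed finding")
    return f"[{severity}] {name}"

sample = '{"name": "exposed-panel", "severity": "high"}'
print(handle_alert(sample))  # [high] exposed-panel
```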
## Ticketing Integrations
The integrations under Ticketing support ticketing functionality as part of scanning and include support for Jira, GitHub, GitLab, and Linear. Navigate to [Scans → Configurations → Ticketing](https://cloud.projectdiscovery.io/scans/configs?type=reporting) to configure your ticketing tools.
### Jira
ProjectDiscovery provides integration support for Jira to create new tickets when vulnerabilities are found.
Provide a name for the configuration, the Jira instance URL, the Account ID, the Email, and the associated API token.
Details on creating an API token are available [in the Jira documentation here.](https://support.atlassian.com/atlassian-account/docs/manage-api-tokens-for-your-atlassian-account/)
### GitHub
ProjectDiscovery provides integration support for GitHub to create new tickets when vulnerabilities are found.
Provide a name for the configuration, the Organization or username, Project name, Issue Assignee, Token, and Issue Label. The Issue Label determines when a ticket is created. (For example, if critical severity is selected, any issues with a critical severity will create a ticket.)
* The severity as label option adds a template result severity to any GitHub issues created.
* Deduplicate posts any new results as comments on existing issues instead of creating new issues for the same result.
Details on setting up access in GitHub [are available here.](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens)
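Under the hood, an integration like this creates issues through GitHub's REST API (`POST /repos/{owner}/{repo}/issues` with a token in the `Authorization` header). A minimal sketch of the payload construction; mapping the result severity to a label is an assumption mirroring the "severity as label" option described above.

```python
import json

# Sketch of a GitHub issue payload, as sent to
# POST https://api.github.com/repos/{owner}/{repo}/issues
# with an "Authorization: Bearer <token>" header.
def issue_payload(title, body, severity, assignee=None):
    # Severity-to-label mapping is an illustrative assumption.
    payload = {"title": title, "body": body, "labels": [severity]}
    if assignee:
        payload["assignees"] = [assignee]
    return json.dumps(payload)

print(issue_payload("SQLi on /login", "Details of the finding...", "critical"))
```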
### GitLab
ProjectDiscovery provides integration support for GitLab to create new tickets when vulnerabilities are found.
Provide your GitLab username, Project name, Project Access Token, and a GitLab Issue label. The Issue Label determines when a ticket is created. (For example, if critical severity is selected, any issues with a critical severity will create a ticket.)
* The severity as label option adds a template result severity to any GitLab issues created.
* Deduplicate posts any new results as comments on existing issues instead of creating new issues for the same result.
Refer to GitLab's documentation for details on [configuring a Project Access token.](https://docs.gitlab.com/ee/user/project/settings/project_access_tokens.html#create-a-project-access-token)
### Linear
ProjectDiscovery integrates with Linear for automated issue tracking. The integration requires the following API parameters:
1. Linear API Key
2. Linear Team ID
3. Linear Open State ID
To retrieve these parameters:
1. **API Key Generation**:
* Path: Linear > Settings > API > Personal API keys
* Direct URL: linear.app/\[workspace]/settings/api
2. **Team ID Retrieval**:
```graphql theme={null}
query {
teams {
nodes {
id
name
}
}
}
```
3. **Open State ID Retrieval**:
```graphql theme={null}
query {
workflowStates {
nodes {
id
name
}
}
}
```
For detailed API documentation, refer to the [Linear API Documentation](https://developers.linear.app/docs/graphql/working-with-the-graphql-api).
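The two queries above can be issued against Linear's GraphQL endpoint (`https://api.linear.app/graphql`) with your API key in the `Authorization` header. A minimal sketch of the request construction; sending it is left to your HTTP client of choice.

```python
import json

LINEAR_API_URL = "https://api.linear.app/graphql"

TEAMS_QUERY = "query { teams { nodes { id name } } }"
STATES_QUERY = "query { workflowStates { nodes { id name } } }"

def build_request(api_key, query):
    """Return (headers, body) for a Linear GraphQL call.
    Linear personal API keys are passed directly in the Authorization header."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": api_key,
    }
    return headers, json.dumps({"query": query})

headers, body = build_request("lin_api_...", TEAMS_QUERY)
```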
## Cloud Asset Discovery
ProjectDiscovery supports integrations with all popular cloud providers to automatically sync externally facing hosts for vulnerability scanning. This comprehensive approach ensures all your cloud resources with external exposure are continuously monitored, complementing our external discovery capabilities. The result is complete visibility of your attack surface across cloud environments through a simple web interface.
### AWS (Amazon Web Services)
Supported AWS Services:
| Service | Description |
| :---------------------------------------------------- | :-------------------------------------------- |
| [EC2](https://aws.amazon.com/ec2/) | VM instances and their public IPs |
| [Route53](https://aws.amazon.com/route53/) | DNS hosted zones and records |
| [S3](https://aws.amazon.com/s3/) | Buckets (especially those public or with DNS) |
| [Cloudfront](https://aws.amazon.com/cloudfront/) | CDN distributions and their domains |
| [ECS](https://aws.amazon.com/ecs/) | Container cluster resources |
| [EKS](https://aws.amazon.com/eks/) | Kubernetes cluster endpoints |
| [ELB](https://aws.amazon.com/elasticloadbalancing/)   | Classic Load Balancers                        |
| [ELBv2](https://aws.amazon.com/elasticloadbalancing/) | Application and Network Load Balancers (ALB/NLB) |
| [Lambda](https://aws.amazon.com/lambda/) | Serverless function endpoints |
| [Lightsail](https://aws.amazon.com/lightsail/) | Lightsail instances (simplified VPS) |
| [Apigateway](https://aws.amazon.com/api-gateway/) | API endpoints deployed via Amazon API Gateway |
By covering these services, ProjectDiscovery can map out a broad range of AWS assets in your account. (Support for additional services may be added over time.)
**AWS Integration Methods**
ProjectDiscovery supports three methods to connect to AWS, each suited for different use cases and security preferences:
1. **Single AWS Account (Access Key & Secret)** – Direct credential-based authentication using an IAM User's Access Key ID and Secret Access Key to connect one AWS account. Choose this for quick setups or single-account monitoring.
2. **Multiple AWS Accounts (Assume Role)** – Use one set of credentials to assume roles in multiple accounts. This method is ideal for organizations with multiple AWS accounts (e.g., development and production). You provide one account's credentials and the common role name that exists in all target accounts.
3. **Cross-Account Role (Role ARN)** – Use a dedicated IAM role with an External ID for third-party access. This option lets you create a cross-account IAM role in your AWS account and grant ProjectDiscovery access via that role's Amazon Resource Name (ARN). This is the most secure integration method, as it follows AWS best practices for third-party account access.
**Prerequisites**
Before configuring the integration, make sure you have:
* **AWS Account** – Access to an AWS account where you can create IAM identities
* **Admin Access to IAM** – Permissions to create IAM users and roles
* **ProjectDiscovery Account** – Access to ProjectDiscovery's Cloud platform
* **Basic AWS IAM Knowledge** – Understanding of IAM users, access keys, and roles
#### 1. Single AWS Account (Access Key & Secret)
To connect a single AWS account directly:
1. **Create a Read-Only IAM User:** In the AWS IAM console, create a new IAM user for ProjectDiscovery integration. Assign **programmatic access** (which generates an Access Key ID and Secret Access Key).
2. **Attach Required Policies:** Grant the user read-only permissions to the AWS services you want to monitor. You can use AWS-managed policies like **AmazonEC2ReadOnlyAccess**, **AmazonS3ReadOnlyAccess**, etc. for each service (see the **Required Permissions** section below).
3. **Configure in ProjectDiscovery:**
* Select **Single AWS Account (Access Key & Secret)**
* Enter your **AWS Access Key ID** and **AWS Secret Access Key**
* Optionally provide a **Session Token** (only for temporary credentials)
* Give the integration a unique name
* Select the AWS services you want to monitor
*Tip:* Use an IAM user with minimal read-only permissions and rotate keys periodically for security.
#### 2. Multiple AWS Accounts (Assume Role)
For monitoring multiple AWS accounts from a central account:
1. **Choose a Primary Account:** Create an IAM user in one AWS account (the "primary") with programmatic access.
2. **Create an IAM Role in Each Target Account:** In each AWS account you want to monitor, create a role that:
* Uses the **same role name** across all accounts (e.g., "ProjectDiscoveryReadOnlyRole")
* Has a trust relationship allowing your primary account to assume it
* Has the required read-only permissions
3. **Configure in ProjectDiscovery:**
* Select **Multiple AWS Accounts (Assume Role)**
* Enter the primary account's **AWS Access Key ID** and **Secret Access Key**
* Specify the **Role Name to Assume** (the common role name)
* List all **AWS Account IDs** to monitor (one per line)
* Give the integration a unique name
* Select the AWS services you want to monitor
#### 3. Cross-Account Role (Role ARN)
The most secure method using ProjectDiscovery's service account:
1. **Create an IAM Role in Your AWS Account:**
* In your AWS console, go to IAM → Roles → Create Role
* Select "Another AWS account" as the trusted entity
* Enter ProjectDiscovery's ARN: `arn:aws:iam::034362060511:user/projectdiscovery`
* Enable "Require External ID" and enter the External ID shown in the ProjectDiscovery UI
* Attach the necessary read-only permissions
2. **Configure in ProjectDiscovery:**
* Select **Cross-Account Role (Role ARN)**
* Enter the **Role ARN** of the role you created
* Give the integration a unique name
* Select the AWS services you want to monitor
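Methods 2 and 3 both reduce to an `sts:AssumeRole` call against each target account. A minimal sketch of how the request parameters fit together (the session name is arbitrary and illustrative; with boto3 you would pass these to `sts_client.assume_role(**params)`):

```python
# Sketch of sts:AssumeRole parameters for the multi-account and
# cross-account methods. The role and session names are illustrative.
def assume_role_params(account_id, role_name, external_id=None):
    params = {
        "RoleArn": f"arn:aws:iam::{account_id}:role/{role_name}",
        "RoleSessionName": "projectdiscovery-discovery",
    }
    if external_id:
        # Cross-account method: the External ID must exactly match the
        # value in the role's trust policy, or AWS denies the request.
        params["ExternalId"] = external_id
    return params

print(assume_role_params("123456789012", "ProjectDiscoveryReadOnlyRole")["RoleArn"])
# arn:aws:iam::123456789012:role/ProjectDiscoveryReadOnlyRole
```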
**Required Permissions**
ProjectDiscovery needs read-only access to your AWS assets. The following AWS-managed policies are recommended:
* EC2 - AmazonEC2ReadOnlyAccess
* Route53 - AmazonRoute53ReadOnlyAccess
* S3 - AmazonS3ReadOnlyAccess
* Lambda - AWSLambda\_ReadOnlyAccess
* ELB - ElasticLoadBalancingReadOnly
* Cloudfront - CloudFrontReadOnlyAccess
Alternatively, you can use this custom policy for minimal permissions:
```json theme={null}
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "RequiredReadPermissions",
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:DescribeRegions",
"route53:ListHostedZones",
"route53:ListResourceRecordSets",
"s3:ListAllMyBuckets",
"lambda:ListFunctions",
"elasticloadbalancing:DescribeLoadBalancers",
"elasticloadbalancing:DescribeTargetGroups",
"elasticloadbalancing:DescribeTargetHealth",
"cloudfront:ListDistributions",
"ecs:ListClusters",
"ecs:ListServices",
"ecs:ListTasks",
"ecs:DescribeTasks",
"ecs:DescribeContainerInstances",
"eks:ListClusters",
"eks:DescribeCluster",
"apigateway:GET",
"lightsail:GetInstances",
"lightsail:GetRegions"
],
"Resource": "*"
}
]
}
```
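As a quick illustration of what this policy grants, the sketch below checks an action name against a subset of the policy's `Action` list, the way a simplified matcher would. Real IAM evaluation also considers resources, conditions, and explicit denies; this only matches action patterns.

```python
import fnmatch

# A subset of the Action list from the custom policy above.
ALLOWED_ACTIONS = [
    "ec2:DescribeInstances", "ec2:DescribeRegions",
    "route53:ListHostedZones", "route53:ListResourceRecordSets",
    "s3:ListAllMyBuckets", "lambda:ListFunctions",
    "elasticloadbalancing:DescribeLoadBalancers",
    "cloudfront:ListDistributions", "eks:ListClusters",
    "apigateway:GET", "lightsail:GetInstances",
]

def is_allowed(action):
    """True if `action` matches any allowed pattern
    (case-insensitive, with IAM-style * wildcards)."""
    return any(fnmatch.fnmatch(action.lower(), p.lower()) for p in ALLOWED_ACTIONS)

print(is_allowed("ec2:DescribeInstances"))  # True
print(is_allowed("s3:DeleteBucket"))        # False
```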
**Verifying the Integration**
After configuring the integration, it's important to verify that ProjectDiscovery is successfully connected and enumerating your AWS assets:
* **Check Asset Discovery:** In the ProjectDiscovery platform, navigate to the cloud assets or inventory section. After a successful integration, you should start seeing resources from your AWS account(s) listed (for example, EC2 instance IDs, S3 bucket names, etc., corresponding to the integrated accounts). It may take a short while for the initial discovery to complete. If you see those assets, the integration is working.
* **Test with a Known Resource:** As a quick test, pick a known resource (like a specific EC2 instance or S3 bucket in your AWS account) and search for it in ProjectDiscovery's asset inventory. If it appears, the connection is functioning and pulling data.
* **Troubleshooting Errors:** If the integration fails or some assets are missing, consider these common issues:
* *Incorrect Credentials:* Double-check that the Access Key and Secret (if used) were entered correctly and correspond to an active IAM user. If you recently created the user, ensure you copied the keys exactly (no extra spaces or missing characters).
* *Insufficient Permissions:* If certain services aren't showing up, the IAM policy might be missing permissions. For example, if S3 buckets aren't listed, confirm that the policy includes `s3:ListAllMyBuckets`. Refer back to the Required Permissions and make sure all relevant actions are allowed. You can also use AWS IAM Policy Simulator or CloudTrail logs to see if any **AccessDenied** errors occur when ProjectDiscovery calls AWS APIs.
* *Assume Role Failures:* In multi-account or cross-account setups, a common issue is a misconfigured trust relationship. If ProjectDiscovery cannot assume a role, you might see an error in the UI or logs like "AccessDenied: Not authorized to perform sts:AssumeRole". In that case, check the following:
* The trust policy of the IAM role (in target account) trusts the correct principal (either your primary account's IAM user/role ARN for multi-account, or ProjectDiscovery's external account ID for cross-account) and the External ID if applicable.
* The role name or ARN in the ProjectDiscovery config exactly matches the one in AWS (spelling/case must match).
* The primary credentials (for multi-account) have permission to call `AssumeRole`.
* *External ID Mismatch:* For cross-account roles, if the external ID in ProjectDiscovery and the one in the IAM role's trust policy do not match, AWS will deny the assume request. Ensure you didn't accidentally copy the wrong value or include extra spaces. It must be exact.
* **AWS CloudTrail Logs:** As an additional verification, you can check AWS CloudTrail in your account. When ProjectDiscovery connects, you should see events like `DescribeInstances`, `ListBuckets`, etc., being called by the IAM user or assumed role. For cross-account roles, you will see an `AssumeRole` event from ProjectDiscovery's AWS account ID, and subsequent calls under the assumed role's identity. This audit trail can confirm that the integration is working as intended and using only allowed actions.
If everything checks out, ProjectDiscovery is now actively monitoring your AWS environment. New resources launched in AWS will be detected on the next scan cycle, and any changes to your cloud footprint will be reflected in the platform. Regularly review the integration and update the IAM permissions if you start using new AWS services.
**References:**
1. [https://docs.aws.amazon.com/IAM/latest/UserGuide/reference\_policies\_examples\_iam\_read-only-console.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_iam_read-only-console.html)
2. [https://docs.aws.amazon.com/IAM/latest/UserGuide/id\_credentials\_access-keys.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html)
3. [https://docs.aws.amazon.com/IAM/latest/UserGuide/id\_credentials\_temp\_request.html](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_request.html)
4. [https://docs.aws.amazon.com/sdkref/latest/guide/feature-assume-role-credentials.html](https://docs.aws.amazon.com/sdkref/latest/guide/feature-assume-role-credentials.html)
5. [https://docs.logrhythm.com/OCbeats/docs/aws-cross-account-access-using-sts-assume-role](https://docs.logrhythm.com/OCbeats/docs/aws-cross-account-access-using-sts-assume-role)
### Google Cloud Platform (GCP)
Supported GCP Services:
* [Cloud DNS](https://cloud.google.com/dns)
* [Kubernetes Engine](https://cloud.google.com/kubernetes-engine)
* [Compute Engine](https://cloud.google.com/products/compute)
* [Bucket](https://cloud.google.com/storage)
* [Cloud Functions](https://cloud.google.com/functions)
* [Cloud Run](https://cloud.google.com/run)
**GCP Integration Methods:**
1. **Organization-Level Asset API** (Recommended for Enterprises)
* Uses Google Cloud's **Asset Inventory API** for comprehensive organization-wide discovery
* Discovers assets across entire GCP organization with a single configuration
* Requires organization-level permissions: `roles/cloudasset.viewer` and `roles/resourcemanager.viewer`
* Ideal for large organizations with multiple projects
2. **Individual Service APIs** (Default)
* Uses individual GCP service APIs for project-specific discovery
* Faster execution with detailed resource metadata
* Requires project-level permissions for each service
* Ideal for focused, single-project discovery
### Multi-Organization Support
ProjectDiscovery supports monitoring **multiple GCP organizations simultaneously**. Simply configure multiple integrations with different organization IDs to get consolidated asset discovery across all your GCP environments (e.g., production, staging, development organizations).
### Finding Your Organization ID
1. **Via Google Cloud Console:**
* Go to the [Google Cloud Console](https://console.cloud.google.com/)
* In the top navigation, click on the **project selector** (next to "Google Cloud Platform")
* Click **All** tab to view all resources
* Look for your organization name - the **Organization ID** is displayed next to it
* Alternatively, go to [IAM & Admin > Settings](https://console.cloud.google.com/iam-admin/settings) - your Organization ID will be shown at the top
2. **Via gcloud CLI:**
```bash theme={null}
# List all organizations you have access to
gcloud organizations list
# Get the organization ID of the currently configured project
gcloud config get-value project
gcloud projects describe $(gcloud config get-value project) --format="value(parent.id)"
```
3. **Via Organization Policies Page:**
* Navigate to [Organization Policies](https://console.cloud.google.com/iam-admin/orgpolicies) in the Console
* Your Organization ID will be displayed in the URL and page header
### Checking Your Permissions
Before setting up the integration, verify you have the necessary permissions:
1. **For Organization-Level Integration:**
```bash theme={null}
# Check if you can list organization assets
gcloud organizations list
# Check if you have the required roles
gcloud organizations get-iam-policy YOUR_ORG_ID --flatten="bindings[].members" --format="table(bindings.role)" --filter="bindings.members:user:YOUR_EMAIL"
```
2. **For Project-Level Integration:**
```bash theme={null}
# Check project permissions
gcloud projects get-iam-policy YOUR_PROJECT_ID --flatten="bindings[].members" --format="table(bindings.role)" --filter="bindings.members:user:YOUR_EMAIL"
```
## Step-by-Step Setup Instructions
### Option 1: Organization-Level Asset API Setup
1. **Verify Organization Access:**
* Ensure you have `roles/cloudasset.viewer` and `roles/resourcemanager.viewer` at the organization level
* You can check this in [IAM & Admin > IAM](https://console.cloud.google.com/iam-admin/iam) by switching to your organization scope
2. **Create Service Account:**
* Navigate to any project within your organization
* Go to [IAM & Admin > Service Accounts](https://console.cloud.google.com/iam-admin/serviceaccounts)
* Click **Create Service Account**
* Name it something like `projectdiscovery-org-scanner`
* Click **Create and Continue**
3. **Grant Organization-Level Permissions:**
* Go to [IAM & Admin > IAM](https://console.cloud.google.com/iam-admin/iam)
* Switch to your **Organization** scope (not project)
* Click **Grant Access**
* Enter your service account email: `projectdiscovery-org-scanner@YOUR_PROJECT_ID.iam.gserviceaccount.com`
* Assign these roles:
* `Cloud Asset Viewer`
* `Organization Viewer`
* Click **Save**
4. **Generate Service Account Key:**
* Return to [Service Accounts](https://console.cloud.google.com/iam-admin/serviceaccounts)
* Click on your service account
* Go to **Keys** tab
* Click **Add Key > Create New Key**
* Choose **JSON** format
* Download and securely store the key file
### Option 2: Individual Service APIs Setup
1. **Select Target Project:**
* Choose the specific project you want to monitor
* Note the **Project ID** (not the display name)
2. **Create Service Account:**
* Navigate to [IAM & Admin > Service Accounts](https://console.cloud.google.com/iam-admin/serviceaccounts) in your target project
* Click **Create Service Account**
* Name it something like `projectdiscovery-scanner`
* Click **Create and Continue**
3. **Grant Project-Level Permissions:**
* On the same page, assign these roles:
* `Compute Viewer`
* `DNS Reader`
* `Storage Object Viewer`
* `Cloud Run Viewer`
* `Cloud Functions Viewer`
* `Kubernetes Engine Viewer`
* `Browser` (for basic project access)
* Click **Continue** and then **Done**
4. **Generate Service Account Key:**
* Click on your service account
* Go to **Keys** tab
* Click **Add Key > Create New Key**
* Choose **JSON** format
* Download and securely store the key file
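Once downloaded, you can sanity-check the key file before uploading it. The required fields below follow Google's standard service account key format; the sample values are illustrative.

```python
import json

# Fields present in every Google service account key file.
REQUIRED_FIELDS = {"type", "project_id", "private_key", "client_email"}

def check_key_file(raw_json):
    """Return True if this looks like a valid service account key."""
    data = json.loads(raw_json)
    return data.get("type") == "service_account" and REQUIRED_FIELDS <= data.keys()

# Illustrative sample (truncated private key, hypothetical project name).
fake = json.dumps({
    "type": "service_account",
    "project_id": "my-project",
    "private_key": "-----BEGIN PRIVATE KEY-----...",
    "client_email": "projectdiscovery-scanner@my-project.iam.gserviceaccount.com",
})
print(check_key_file(fake))  # True
```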
References:
1. [https://cloud.google.com/iam/docs/service-account-overview](https://cloud.google.com/iam/docs/service-account-overview)
2. [https://cloud.google.com/iam/docs/keys-create-delete#creating](https://cloud.google.com/iam/docs/keys-create-delete#creating)
3. [https://cloud.google.com/asset-inventory/docs/overview](https://cloud.google.com/asset-inventory/docs/overview)
### Azure
Supported Azure Services:
* Virtual Machines
* Public IP Addresses
* Traffic Manager
* Storage Accounts
* Azure Kubernetes Service (AKS)
* Content Delivery Network (CDN)
* DNS Zones and Records
* Application Gateway & Load Balancer
* Container Instances
* App Service & Web Apps
* Azure Functions
* API Management
* Front Door
* Container Apps
* Static Web Apps
**Azure Integration Method:**
ProjectDiscovery Cloud Platform uses Microsoft's modern **Track 2 SDK** for Azure integration, providing enhanced security, performance, and support for the latest Azure services. The integration supports **6 authentication methods** to accommodate various cloud deployment scenarios while maintaining 100% backward compatibility with existing configurations.
### Quick Setup Options
**For most users (Service Principal method):**
Create an App Registration in Azure Active Directory with the following required credentials:
* Azure Tenant ID
* Azure Subscription ID
* Azure Client ID
* Azure Client Secret
Below are the steps to get the above credentials:
1. **Create App Registration:**
* Go to **Azure Active Directory > App registrations > + New registration**.
* From the app's **Overview** page, collect the **Application (client) ID** and **Directory (tenant) ID**.
2. **Generate Client Secret:**
* In the app, navigate to **Certificates & secrets > + New client secret**.
* **CRITICAL:** Copy the secret **Value** immediately, as it is shown only once.
3. **Assign Permissions:**
* Go to your **Subscription > Access control (IAM)**.
* Prefer a least-privilege custom role instead of the broad built-in Reader.
* Create a custom role (for example, "CloudList Reader") with minimal actions and then assign it to the App Registration you created. Example definition (the read-only `actions` below are illustrative; extend the list to cover every service you use):
```json theme={null}
{
  "properties": {
    "roleName": "CloudList Reader",
    "description": "Minimal permissions for CloudList to discover Azure resources including VMs, Storage, AKS, CDN, DNS, and more",
    "assignableScopes": [
      "/subscriptions/YOUR_SUBSCRIPTION_ID"
    ],
    "permissions": [
      {
        "actions": [
          "Microsoft.Compute/virtualMachines/read",
          "Microsoft.Network/publicIPAddresses/read",
          "Microsoft.Network/dnszones/read",
          "Microsoft.Storage/storageAccounts/read",
          "Microsoft.ContainerService/managedClusters/read",
          "Microsoft.Cdn/profiles/read",
          "Microsoft.Resources/subscriptions/resourceGroups/read"
        ],
        "notActions": [],
        "dataActions": [],
        "notDataActions": []
      }
    ]
  }
}
```
### Alibaba Cloud
Supported Alibaba Cloud Services:
* ECS Instances
**Alibaba Integration Method**
This guide details the secure, best-practice method for connecting to Alibaba Cloud using a dedicated RAM user with read-only permissions.
1. **Create a RAM User for API Access:**
* Navigate to the **RAM (Resource Access Management) console**. [Ref](https://ram.console.aliyun.com/manage/ak)
* From the left menu, go to **Identities > Users** and click **Create User**.
* Enter a **Logon Name** (e.g., `projectdiscovery-readonly`).
* For **Access Mode**, select **OpenAPI Access** and click **OK**. This is for programmatic access, not console login.
2. **Securely Store the Access Key**: An AccessKey pair is generated immediately after the user is created. This is the only time the secret is shown.
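If you also use ProjectDiscovery's open-source `cloudlist` CLI, the same RAM credentials map onto a provider block in its config file, roughly like the sketch below (field names follow the cloudlist README; verify them against your installed version):

```yaml
- provider: alibaba
  id: readonly-account            # arbitrary label for this profile
  alibaba_region_id: $ALIBABA_REGION_ID
  alibaba_access_key: $ALIBABA_ACCESS_KEY
  alibaba_access_key_secret: $ALIBABA_ACCESS_KEY_SECRET
```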
### Kubernetes
Supported Kubernetes Services:
* Services
* Ingresses
* Cross-cloud cluster discovery
### Cloudflare
Supported Cloudflare Services:
* DNS and CDN assets
**Cloudflare Integration Methods:**
You can integrate Cloudflare into ProjectDiscovery via one of two methods:
1. **Global API Key**
* Go to the Cloudflare dashboard.
* Under **API Keys**, locate the **Global API Key** and click **View**.
* Authenticate and copy the key.
* Enter your Cloudflare account email and the Global API Key from the previous step into ProjectDiscovery Cloud Platform.
* Give the integration a unique name and click **Verify**.
2. **API Token**
* From the [Cloudflare dashboard ↗](https://dash.cloudflare.com/profile/api-tokens/), go to **My Profile** > **API Tokens** for user tokens. For Account Tokens, go to **Manage Account** > **API Tokens**.
* Select **Create Token**.
* Grant the required permissions (see reference 2 for details) and create the token. Copy the token.
* Enter the API token into ProjectDiscovery Cloud Platform.
* Give the integration a unique name and click **Verify**.
References:
1. [https://developers.cloudflare.com/api/keys](https://developers.cloudflare.com/api/keys)
2. [https://developers.cloudflare.com/fundamentals/api/get-started/create-token/](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/)
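For the open-source `cloudlist` CLI, the Global API Key method corresponds to a provider block like this sketch (field names as in the cloudlist README; confirm against your installed version — the token-based method uses a different field documented there):

```yaml
- provider: cloudflare
  id: main-account        # arbitrary label for this profile
  email: $CF_EMAIL
  api_key: $CF_API_KEY
```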
### Fastly
**Fastly Integration Method**
* Go to Fastly [account settings](https://manage.fastly.com/account/personal).
* Under **API**, click **Create API token** if you don’t already have one.
* Copy the API token.
* Enter the API token into ProjectDiscovery Cloud Platform.
* Give the integration a unique name and click **Verify**.
### DigitalOcean
**DigitalOcean Integration Method**
* Go to DigitalOcean [API Settings](https://cloud.digitalocean.com/account/api/tokens).
* Click **Generate New Token**
* Provide a name and enable **read-only** access scope
* Copy the token
* Enter the token into ProjectDiscovery Cloud Platform.
* Give the integration a unique name and click **Verify**.
Supported Services:
* Droplets and managed services
References:
1. [https://docs.digitalocean.com/reference/api/create-personal-access-token/](https://docs.digitalocean.com/reference/api/create-personal-access-token/)
2. [https://docs.digitalocean.com/reference/api/](https://docs.digitalocean.com/reference/api/)
# Introducing ProjectDiscovery
Source: https://docs.projectdiscovery.io/cloud/introduction
4. **Launch Your Scan**:
* Click "Create scan" to start scanning your internal targets
Your scan will now execute against internal targets through the secure TunnelX connection, with results appearing in your ProjectDiscovery dashboard just like external scans.
3. Use the `nuclei -auth` command, and enter your API key when prompted.
### Configure Team (Optional)
If you want to upload the scan results to a team workspace instead of your personal workspace, you can configure the Team ID using either method:
* **Obtain Team ID:**
* Navigate to [https://cloud.projectdiscovery.io/settings/team](https://cloud.projectdiscovery.io/settings/team)
* Copy the Team ID from the top right section
* **CLI Option:**
```bash
nuclei -tid XXXXXX -cloud-upload
```
* **ENV Variable:**
```bash
export PDCP_TEAM_ID=XXXXX
```
2. Run your scan with the upload flag:
```bash
# Single target
nuclei -u http://internal-target -cloud-upload
# Multiple targets
nuclei -l internal-hosts.txt -cloud-upload
# With specific templates
nuclei -u http://internal-target -t misconfiguration/ -cloud-upload
```
All Enterprise accounts are automatically enrolled in Real-time Autoscan. To check if Real-time Autoscan is enabled for your account:
1. Visit the [ProjectDiscovery Cloud Dashboard](https://cloud.projectdiscovery.io/)
2. Navigate to the Real-Time Scanning section directly from the dashboard home
3. Check if "Real-time Autoscan" is toggled on
### Custom Asset Selection
By default, every asset added to ProjectDiscovery will be automatically scanned when new Nuclei templates are released.
Real-time Autoscan can also be configured to scan a subset of your assets by taking the following steps:
1. Visit the [ProjectDiscovery Cloud Dashboard](https://cloud.projectdiscovery.io/)
2. Navigate to the Real-Time Scanning section directly from the dashboard home
3. Click on the gear icon next to the toggle
4. Select **Custom Assets**
5. Select the asset groups you wish to include in Real-time Autoscan
6. Click on **Update**
## Reviewing Scan Results
Real-time Autoscan results are grouped as a separate scan titled **"Early Templates Autoscan"** under the **Scans** tab. This scan updates automatically whenever a new Nuclei template is merged, scanning your assets with the latest template.
Detected vulnerabilities will appear as open results within the scan. These results will remain open even if the scan is later updated with a newly merged Nuclei template.
To view the most recent template used in the scan:
1. Click the three dots menu to the right of the scan.
2. Select **Update**
3. Click on the tab **Set templates**.
4. Expand the folder labeled **"Early Templates"**.
## Alerting
By default, only newly detected vulnerabilities generate an email or message alert. However, on occasion, we may merge a trending exploit that warrants a notification even if no vulnerable hosts are detected. Such a notification can be shared internally to proactively communicate a strong security posture to relevant stakeholders and leadership.
The retest process will:
* **Target specific vulnerabilities**: Only tests the vulnerabilities you've selected
* **Provide immediate results**: Runs on-demand when you need verification
* **Update status automatically**: If the vulnerability is fixed, the status updates to "Fixed"; otherwise, it reverts to its original status
* **Require minimal resources**: Lightweight operation focused on specific findings
Manual retest is particularly valuable when you need immediate confirmation after applying fixes or want to validate specific vulnerabilities before important deadlines.
## Auto Retest
The platform includes an **automated retesting feature** that runs daily to continuously monitor the status of your vulnerabilities. This feature is **enabled by default** and provides ongoing validation of your security posture without manual intervention.
### How Auto Retest Works
Auto retest automatically examines all open and fixed vulnerabilities in your system on a daily basis to:
* **Maintain accuracy**: Ensures that vulnerability statuses reflect the current state of your assets
* **Detect regressions**: Identifies when previously fixed vulnerabilities have reappeared
* **Provide visibility**: Gives you clear insight into which vulnerabilities are actively exploitable
Since this process runs automatically in the background, you may not be aware it's happening - but it's continuously working to keep your vulnerability data current and reliable.
### Manual Retest vs Auto Retest
The platform offers two distinct retesting approaches, each serving different needs:
| Feature | Manual Retest | Auto Retest |
| ------------------ | ---------------------------------------------- | -------------------------------------------------- |
| **Scope** | Retests specific selected vulnerabilities | Retests all existing vulnerabilities in the system |
| **Purpose** | Ad-hoc validation of specific findings | Continuous validation and regression detection |
| **Trigger** | User-initiated, on-demand | Daily, automatic |
| **Control** | Full user control over what gets tested | Automatic, system-managed |
| **Resource usage** | Minimal, highly targeted | Lightweight, comprehensive coverage |
| **Use case** | Verify remediation of specific vulnerabilities | Maintain overall security posture |
| **Timing** | Immediate, when needed | Background, daily schedule |
### When to Use Each Approach
**Choose Manual Retest when:**
* You've just fixed a specific vulnerability and want immediate confirmation
* You need to validate remediation before a security review or audit
* You want to test a small subset of critical vulnerabilities quickly
* You're working on specific issues and need real-time feedback
**Auto Retest is ideal for:**
* Maintaining continuous security posture monitoring
* Detecting regressions without manual effort
* Keeping vulnerability statuses accurate across your entire portfolio
* Background monitoring that doesn't require user intervention
* Use the `httpx -auth` command, and enter your API key when prompted.
#### Configure Team (Optional)
If you want to upload the asset results to a team workspace instead of your personal workspace, you can configure the Team ID. You can use either the CLI option or the environment variable, depending on your preference.
* **Obtain Team ID:**
* To obtain your Team ID, navigate to [https://cloud.projectdiscovery.io/settings/team](https://cloud.projectdiscovery.io/settings/team) and copy the Team ID from the top right section.

* **CLI Option:**
* Use the `-tid` or `-team-id` option to specify the team ID.
* Example: `nuclei -tid XXXXXX -dashboard`
* **ENV Variable:**
* Set the `PDCP_TEAM_ID` environment variable to your team ID.
* Example: `export PDCP_TEAM_ID=XXXXX`
Either of these options is sufficient to configure the Team ID.
#### Run httpx with UI Dashboard
To run `httpx` and upload the results to the UI Dashboard:
```console
$ chaos -d hackerone.com | httpx -dashboard
__ __ __ _ __
/ /_ / /_/ /_____ | |/ /
/ __ \/ __/ __/ __ \| /
/ / / / /_/ /_/ /_/ / |
/_/ /_/\__/\__/ .___/_/|_|
/_/
projectdiscovery.io
[INF] Current httpx version v1.6.6 (latest)
[INF] To view results on UI dashboard, visit https://cloud.projectdiscovery.io/assets upon completion.
http://a.ns.hackerone.com
https://www.hackerone.com
http://b.ns.hackerone.com
https://api.hackerone.com
https://mta-sts.forwarding.hackerone.com
https://docs.hackerone.com
https://support.hackerone.com
https://mta-sts.hackerone.com
https://gslink.hackerone.com
[INF] Found 10 results, View found results in dashboard : https://cloud.projectdiscovery.io/assets/cqd56lebh6us73bi22pg
```

#### Uploading to an Existing Asset Group
To upload new assets to an existing asset group:
```console
$ chaos -d hackerone.com | httpx -dashboard -aid existing-asset-id
```
#### Setting an Asset Group Name
To set a custom asset group name:
```console
$ chaos -d hackerone.com | httpx -dashboard -aname "Custom Asset Group"
```
### Additional upload options
* `-pd, -dashboard`: Enable uploading of `httpx` results to the ProjectDiscovery Cloud (PDCP) UI Dashboard.
* `-aid, -asset-id string`: Upload new assets to an existing asset ID (optional).
* `-aname, -asset-name string`: Set the asset group name (optional).
* `-pdu, -dashboard-upload string`: Upload `httpx` output file (jsonl) to the ProjectDiscovery Cloud (PDCP) UI Dashboard.
### Environment Variables
* `export ENABLE_CLOUD_UPLOAD=true`: Enable dashboard upload by default.
* `export DISABLE_CLOUD_UPLOAD_WARN=true`: Disable dashboard warning.
* `export PDCP_TEAM_ID=XXXXX`: Set the team ID for the ProjectDiscovery Cloud Platform.
## Expanded Examples
### Using httpx as a library
httpx can be used as a library by creating an instance of the `Options` struct and populating it with the same options that would be specified via the CLI. Once validated, the struct should be passed to a runner instance (to be closed at the end of the program), and the `RunEnumeration` method should be called.
* A basic example of how to use httpx as a library is available in the [GitHub examples](https://github.com/projectdiscovery/httpx/tree/main/examples) folder.
### Using httpx screenshot
httpx supports taking screenshots via the `-screenshot` flag, which captures target URLs, pages, or endpoints along with the rendered DOM.
This functionality enables a comprehensive view of the target's visual content.
The rendered DOM body is also included in the JSON line output when the `-screenshot` option is used together with `-json`.
To use this feature, add the `-screenshot` flag to the `httpx` command.
`httpx -screenshot -u https://example.com`
For example, a screenshot of `https://example.com` also captures its rendered DOM body text: "This domain is for use in illustrative examples in documents. You may use this domain in literature without prior coordination or asking for permission."
2. Use the `nuclei -auth` command, and enter your API key when prompted.
3. To perform a scan and upload the results straight to the cloud, use the `-cloud-upload` option while running a nuclei scan.
An example command might look like this:
```bash
nuclei -target http://honey.scanme.sh -cloud-upload
```
And the output would be like this:
```console
__ _
____ __ _______/ /__ (_)
/ __ \/ / / / ___/ / _ \/ /
/ / / / /_/ / /__/ / __/ /
/_/ /_/\__,_/\___/_/\___/_/ v3.1.0
projectdiscovery.io
[INF] Current nuclei version: v3.1.0 (latest)
[INF] Current nuclei-templates version: v9.6.9 (latest)
[INF] To view results on cloud dashboard, visit https://cloud.projectdiscovery.io/scans upon scan completion.
[INF] New templates added in latest release: 73
[INF] Templates loaded for current scan: 71
[INF] Executing 71 signed templates from projectdiscovery/nuclei-templates
[INF] Targets loaded for current scan: 1
[INF] Using Interactsh Server: oast.live
[CVE-2017-9506] [http] [medium] http://honey.scanme.sh/plugins/servlet/oauth/users/icon-uri?consumerUri=http://clk37fcdiuf176s376hgjzo3xsoq5bdad.oast.live
[CVE-2019-9978] [http] [medium] http://honey.scanme.sh/wp-admin/admin-post.php?swp_debug=load_options&swp_url=http://clk37fcdiuf176s376hgyk9ppdqe9a83z.oast.live
[CVE-2019-8451] [http] [medium] http://honey.scanme.sh/plugins/servlet/gadgets/makeRequest
[CVE-2015-8813] [http] [high] http://honey.scanme.sh/Umbraco/feedproxy.aspx?url=http://clk37fcdiuf176s376hgj885caqoc713k.oast.live
[CVE-2020-24148] [http] [critical] http://honey.scanme.sh/wp-admin/admin-ajax.php?action=moove_read_xml
[CVE-2020-5775] [http] [medium] http://honey.scanme.sh/external_content/retrieve/oembed?endpoint=http://clk37fcdiuf176s376hgyyxa48ih7jep5.oast.live&url=foo
[CVE-2020-7796] [http] [critical] http://honey.scanme.sh/zimlet/com_zimbra_webex/httpPost.jsp?companyId=http://clk37fcdiuf176s376hgi9b8sd33se5sr.oast.live%23
[CVE-2017-18638] [http] [high] http://honey.scanme.sh/composer/send_email?to=hVsp@XOvw&url=http://clk37fcdiuf176s376hgyf8y81i9oju3e.oast.live
[CVE-2018-15517] [http] [high] http://honey.scanme.sh/index.php/System/MailConnect/host/clk37fcdiuf176s376hgi5j3fsht3dchj.oast.live/port/80/secure/
[CVE-2021-45967] [http] [critical] http://honey.scanme.sh/services/pluginscript/..;/..;/..;/getFavicon?host=clk37fcdiuf176s376hgh1y3xjzb3yjpy.oast.live
[CVE-2021-26855] [http] [critical] http://honey.scanme.sh/owa/auth/x.js
[INF] Scan results uploaded! View them at https://cloud.projectdiscovery.io/scans/clk37krsr14s73afc3ag
```
After the scan, a URL will be displayed on the command line interface. Visit this URL to check your results on the Cloud Dashboard.
### Advanced Integration Options
**Setting API key via environment variable**
Avoid entering your API key at the interactive prompt by setting it as an environment variable.
```sh
export PDCP_API_KEY=XXXX-XXXX
```
**Enabling result upload by default**
If you want all your scans to automatically upload results to the cloud, enable the `ENABLE_CLOUD_UPLOAD` environment variable.
```sh
export ENABLE_CLOUD_UPLOAD=true
```
**Disabling cloud upload warnings**
To suppress warnings about result uploads, set the `DISABLE_CLOUD_UPLOAD_WRN` environment variable.
```sh
export DISABLE_CLOUD_UPLOAD_WRN=true
```
Your configured PDCP API key is stored in `$HOME/.pdcp/credentials.yaml`.
The `code` property strictly requires a function reference. Direct expressions or values are invalid and will not work. Always use a function.
**Incorrect:**
```yaml
action: script
args:
code: alert(document.domain) # ❌ This is NOT a function reference
```
**Correct:**
```yaml
action: script
args:
code: () => alert(document.domain) # ✅ This is a function reference
```
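For context, a `script` action typically sits inside a headless template's `steps` sequence like the minimal sketch below (the `id`/`info` metadata and the navigation steps are illustrative, not a complete template):

```yaml
id: headless-script-example

info:
  name: Headless script example
  author: you
  severity: info

headless:
  - steps:
      - action: navigate
        args:
          url: "{{BaseURL}}"
      - action: waitload
      - action: script
        args:
          code: () => document.title  # a function reference, as required
```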