Getting Started

Getting started with Sift Security is a simple three-step process:

  • Select a deployment option and start your Sift Security instance
  • Configure your API scans and start seeing configuration alerts in minutes
  • Configure your CloudTrail logs and start seeing threat detection in hours

This page walks through the details of the setup process.


The setup process takes 15 to 60 minutes to complete.

Deployment Options

To get started with Sift Security, choose one of the available deployment models and follow the instructions for ingesting your data.

CloudHunter AMI
The AMI is a standalone Sift Security installation that you install in your AWS account.
CloudHunter SaaS
The SaaS solution is hosted and managed by Sift Security. You get a dedicated instance just for your organization, and Sift Security handles all maintenance and upgrades.

Installation and Initial Login

For AMI-based installations of Sift Security Hunter, launch an instance from our AMI in the AWS Marketplace, or use an AMI we have shared with you privately. A privately shared AMI appears in the AWS account and region you specified to us. By default, the EC2 console shows only the AMIs you own; using the drop-down list to the left of the search bar within the AMI window, select “Private Images” to see images shared with you, then search for the AMI ID we provided.

Once you have launched an instance in your account, navigate to https://<InstanceElasticIP> in the Chrome browser, where <InstanceElasticIP> is the IP address of your Sift Security installation. Log in with the username “admin” and the instance ID as the password (e.g. “i-1234567890”).

For SaaS installations, the URL and explicit login credentials will be provided to you.

Ingesting Data from AWS

At a minimum, CloudHunter requires API access to your AWS account for configuration scanning and access to S3 buckets containing your CloudTrail data. Other data sources that can be ingested from S3 buckets include:

  • VPC Flow (Network traffic)
  • ELBv2 (Application Load Balancers)
  • Authpriv logs (Endpoint authentication)
  • GuardDuty Findings

Supported AWS Data Sources

The following table provides a detailed breakdown of all of the AWS services we support. API support means that we use API scans to evaluate whether the configuration of the resources meets best practices and compliance standards. Log support means that we ingest the logs from that service, providing alerts around potentially risky or malicious usage.


✔ - Fully Supported: Index, Graph, Alerts, & Dashboards
⚬ - Partially Supported: Index or Alerts only
· - Coming Soon: CloudTrail logs only, full support on the roadmap
AWS Data Source Support
Category Service Logs API
Application Services Simple Email Service (SES) ·
Application Services Simple Notification Service
Application Services Simple Queuing Service
Application Services STS  
Big Data & Analytics Elastic MapReduce (EMR) ·
Big Data & Analytics Kinesis    
Big Data & Analytics Machine Learning    
Compute Elastic Beanstalk    
Compute Elastic Compute Cloud (EC2)
Compute Lambda
Compute Elastic Container Service (ECS) ·
Compute ECS for Kubernetes (EKS) · ·
Database DynamoDB ·
Database Relational Database Service (RDS)
Database Redshift ·
Database SimpleDB    
Internet of Things (IOT) IOT · ·
Management AWS Config · ·
Management CloudFormation ·
Management CloudTrail
Management CloudWatch ·
Management Cognito · ·
Management Directory Service · ·
Management Identity Access Management (IAM)
Management Organizations · ·
Management Signin  
Management API Gateway    
Network Application Load Balancer (ALB)
Network CloudFront · ·
Network DirectConnect ·
Network Elastic Load Balancer (ELB)
Network ELBv2
Network Route53 ·
Network Virtual Private Cloud
Network VPC Flow  
Security Certificate Manager · ·
Security CloudHSM    
Security GuardDuty ·
Security Inspector  
Security Key Management Service (KMS) ·
Security Macie · ·
Security Web Application Firewall (WAF) · ·
Storage Elastic Block Store    
Storage Elastic File System (EFS) ·
Storage Elasticache ·
Storage Glacier · ·
Storage S3

AWS Setup Instructions using CloudFormation Templates

If you would like to streamline the setup process, you can use our CloudFormation templates to help with the AWS setup.

Setting up API access:

  1. Authenticate to AWS with a user or role that has enough permissions to create IAM entities and CloudFormation stacks.
  2. Create a new CloudFormation stack by uploading the nsk-csa.yaml file.
  3. The CloudFormation stack will prompt you for a name, which you can select.
  4. You will also be prompted for the AWS account number to trust and an External ID.
  5. Once the CloudFormation stack has been created successfully, make a note of the Outputs. You will use these to configure CloudHunter.

Setting up CloudTrail Log access:

  1. Authenticate to AWS with a user or role that has enough permissions to create IAM entities and CloudFormation stacks.
  2. Create a new CloudFormation stack by uploading the nsk-cloudtrail.yaml file.
  3. The CloudFormation stack will prompt you for a name, which you can select.
  4. You will also be prompted for the S3 bucket name that contains CloudTrail logs, and the Role name to gain access to it.
  5. If you already completed our API access setup and don’t want to create a new role, you can use the same role.
  6. Once the CloudFormation stack has been created successfully, make a note of the Outputs. You will use these to configure CloudHunter.

Once the stacks have been created successfully, refer to the rest of the procedures below for how to configure CloudHunter.

AWS Setup Instructions for API Sources

To configure API-based sources, begin by creating a cross-account role in each of your AWS accounts. The cross-account role should have the AWS managed IAM policy called SecurityAudit attached, as well as the following policy. When creating the cross-account role, update the trust policy for the role to trust this account number: 727533209477. You may select an External ID that you would like us to use when assuming the role. For more information about cross-account roles, see the AWS IAM documentation on roles for cross-account access.
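The trust relationship described above follows the standard IAM cross-account pattern; as a sketch, the snippet below assembles such a trust policy in Python (the External ID value is a placeholder you choose yourself):

```python
import json

# Sketch: assemble the trust policy for the CloudHunter cross-account role.
# The account number comes from the text above; EXTERNAL_ID is a placeholder
# for whatever value you choose to share with Sift Security.
SIFT_ACCOUNT = "727533209477"
EXTERNAL_ID = "choose-your-own-external-id"  # placeholder

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::%s:root" % SIFT_ACCOUNT},
            "Action": "sts:AssumeRole",
            "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

You would attach this as the role’s trust relationship, alongside the SecurityAudit policy and the permissions policy shown in this section.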

You will need to note the Role ARN and External ID to configure CloudHunter. CloudHunter pulls in all the data it has access to, but requires only read access in your environment. The following policy is required for full support; if you wish to omit any services from CloudHunter, omit them from the policy.

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [ ... ],
                "Resource": "*"
            }
        ]
    }

Next, connect CloudHunter to each of your accounts:

  1. Click the gear icon on the top right.
  2. Select “Add Data Source” and “AWS API”
  3. Give each of your accounts a distinct input name, select “Role” as the S3 auth method, and enter your Role ARN and External ID in the text boxes.
  4. “Region” should be left unchanged, unless the account you are configuring is in GovCloud. If the account is in GovCloud, you must select “GovCloud (US)”

After you have set up all of your AWS accounts, click the “API Scan” button. This triggers an API scan of all of your accounts. A scan typically takes a few minutes, but can take longer if you have many accounts. Navigate to the “Risks” -> “Alerts” tab to see the configuration alerts generated from your accounts.

For a full list of services supported through the API, see Supported AWS Data Sources.

AWS Setup Instructions for S3 Sources

Once you have verified that your API sources have been correctly configured, you can configure CloudHunter to ingest your CloudTrail logs (and any other logs you have stored in S3 buckets).

For whichever account(s) hold your CloudTrail logs, you must configure your cross-account role to access the appropriate S3 buckets. Substitute [YOUR-BUCKET-NAME] in the policy below with the name of your S3 bucket. If you are using GovCloud, the ARN string should begin with arn:aws-us-gov:s3::: instead of arn:aws:s3:::.
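For illustration, a complete read-only bucket policy along these lines might look like the sketch below; the action list is an assumed minimal read-only set, so adjust it to your environment:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": ["arn:aws:s3:::[YOUR-BUCKET-NAME]"],
            "Principal": {"AWS": "arn:aws:iam::account-id:root"}
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::[YOUR-BUCKET-NAME]/*"],
            "Principal": {"AWS": "arn:aws:iam::account-id:root"}
        }
    ]
}
```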

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [ ... ],
                "Resource": [ ... ],
                "Principal": {
                    "AWS": "arn:aws:iam::account-id:root"
                }
            }
        ]
    }
Next, connect CloudHunter to the bucket:

  1. Click the gear icon on the top right.
  2. Select “Add Data Source” and “Logs from S3”
  3. Give your CloudTrail source a distinct name.
  4. Select “CloudTrail” as the “Log Type”
  5. Select “S3” as the “Input Source”
  6. Select “Commercial” as the “Account Type” if you are using a standard AWS account. If you are using GovCloud, select “GovCloud”
  7. Select “Cross-account Role” and provide your Role ARN and External ID.
  8. Please specify a “Start Date” for the records to ingest. CloudHunter will not ingest any data before this date.

This process must be repeated for each bucket you wish to ingest into Sift Security. Ingestion starts immediately when the configuration is saved and may take a few hours, depending on how much data is available.
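The “Start Date” cutoff maps naturally onto CloudTrail’s delivery layout, where each S3 object key embeds its date. The hypothetical helper below illustrates how a key’s date can be compared against a start date:

```python
from datetime import date

def cloudtrail_key_date(key: str) -> date:
    # CloudTrail delivers objects under keys of the form
    # AWSLogs/<account>/CloudTrail/<region>/<YYYY>/<MM>/<DD>/<file>.json.gz
    parts = key.split("/")
    year, month, day = parts[4:7]
    return date(int(year), int(month), int(day))

start = date(2018, 6, 1)
key = ("AWSLogs/123456789012/CloudTrail/us-east-1/2018/06/15/"
       "123456789012_CloudTrail_us-east-1_20180615T0000Z_example.json.gz")
print(cloudtrail_key_date(key) >= start)  # True: on or after the Start Date
```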

Exporting GuardDuty Findings to S3

In testing the export of GuardDuty findings, we have found that exporting the findings to S3 yields more complete data than ingesting the findings directly via the API. To simplify the setup, Sift will provide you with a CloudFormation template to complete the deployment. Contact us to obtain the CloudFormation template, and then follow the steps below:

  1. Set up an S3 bucket to hold the findings. You can follow the instructions above to set up an S3 bucket that CloudHunter has permission to read, so that it can ingest the GuardDuty findings.
  2. Go to the region(s) in which you have GuardDuty enabled.
  3. Go to the CloudFormation console and select “Create Stack” and upload the template you have been provided.
  4. Fill in the stack name you would like to use. You will also need to provide the ARN of the S3 bucket you created in step #1.

After that, a CloudWatch event will run every 5 minutes and export any new GuardDuty findings to the S3 bucket for you. Follow the instructions for ingesting logs from S3 buckets to ingest the findings into CloudHunter.

AWS Setup Instructions for OSQuery

osquery provides performant visibility into endpoints running inside your cloud environments. CloudHunter leverages osquery to collect the following information:

  • Process execution events, to provide visibility into what is running on your cloud instances.
  • File Integrity Monitoring events, to provide visibility into changes being made to sensitive files on your cloud instances.
  • Socket events, to provide visibility into what processes are listening for inbound networking connections and initiating outbound connections.

CloudHunter provides detections on top of this information to alert you to active threats in your cloud instances, and correlates this activity with all the other information we collect to provide a complete picture of your environment. This section describes how to set up osquery inside your environment.

The osquery infrastructure consists of three components:

  • A lightweight agent that runs on each of your cloud instances.
  • A collector that runs in your cloud environment to receive data from the agents.
  • An S3 bucket, into which the collector stores the data.

After you have set up osquery in your environment, simply point CloudHunter at the S3 bucket to begin analyzing the logs.


More information about osquery can be found on the osquery website.

To set up osquery in your AWS environment, perform the following procedure:

  1. Install the collector

    Sift-Aggregator is the collector that manages configuration and queries for the systems that have Sift-Agent installed on them, collects the query results, and stores them in an S3 bucket.

    Contact us to get the latest collector AMI. We recommend the following configuration:

    • Instance type: t2.large
    • Storage: 100 GB of general purpose SSD (gp2)

    This instance must be reachable from all of the instances on which you plan to install the endpoint agent. Ensure that your network ACLs and Security Groups for this instance allow inbound connections on ports 5000, 5432, and 6379.


    More information about how to launch and connect to your AWS instance can be found here: Getting Started with Amazon EC2 Linux Instances

  2. Set up S3 bucket

    Once you have access to your Sift-Aggregator instance, you will need a separate S3 bucket dedicated to osquery. Once you have created this bucket, ensure that Sift-Aggregator has write permissions on it. We recommend using an IAM role to provide this level of access. The S3 bucket should have a policy like the sample below:

        {
            "Id": "Policy1530036597189",
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "Stmt1530036578131",
                    "Action": [ ... ],
                    "Effect": "Allow",
                    "Resource": [ ... ],
                    "Principal": {
                        "AWS": "arn:aws:iam::account-id:root"
                    }
                }
            ]
        }

    This bucket must also be readable by CloudHunter, and must be added to the cloudhunter user or cross-account role.

    On the aggregator machine, open the file /etc/sysconfig/aggregator_env and edit the bucket path and the region according to the configuration you set up above.

    It is recommended to use an IAM role assigned to the instance to obtain write access to the S3 bucket. However, if you must use an access key, then add the following lines to the file:


    Once you have made these changes, restart the aggregator service by running the following command:

    sudo systemctl restart sift_aggregator.service


    More information on setting up the policy for S3 buckets can be found here: S3 Bucket Access

    More information about instance roles can be found here: IAM Roles for Amazon EC2

  3. Add the osquery output to CloudHunter

    When you are done setting up your S3 bucket for osquery, you can add it to your CloudHunter instance.

    1. Once you log in to CloudHunter, click the “System Configuration” gear icon on the top right.
    2. On the top left, choose “Data Sources” and then click “Add Data Source” on the top right.
    3. Click on “Logs from S3” and choose “osquery” as the “Log Type”.
    4. Fill out the rest of the boxes with the osquery bucket’s configurations.
  4. Install the endpoint agent

    Sift-Agent needs to be installed on each client machine on which you wish to run osquery. Sift-Agent requires Python 2.7.

    In order to install Sift-Agent, execute the following commands on each of your endpoints, substituting the address of the collector instance you installed in step 1 (the installer file name shown is illustrative):

    curl -k -fsSL https://<address>:5000/sift_agent_installer > sift_agent_installer
    chmod u+x sift_agent_installer
    sudo ./sift_agent_installer

    This will download the Sift-Agent bundle and install it under /opt/sift_agent directory. It will also set up the client machine to start sending information to Sift-Aggregator.

    Since the client machine needs to communicate with the aggregator machine, the same network configuration must be applied on the client side as well: the port rules set up in step 1 for Sift-Aggregator must also allow traffic from the machines running Sift-Agent.

    auditd prevents the agent from collecting certain events to send to the aggregator. To ensure the aggregator receives all events from your machine, disable auditd by running the commands below:

    sudo systemctl disable auditd
    sudo service auditd stop
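Before installing agents, you can verify from an endpoint that the collector’s ports are reachable; a minimal sketch (the hostname below is a hypothetical placeholder for your Sift-Aggregator address):

```python
import socket

# Sketch: check from an endpoint that the collector's ports are reachable.
# "collector.internal" is a placeholder -- substitute the address of your
# Sift-Aggregator instance.
def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (5000, 5432, 6379):
    print(port, port_open("collector.internal", port))
```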


Ingesting Data from Azure

Supported Azure Data Sources

CloudHunter supports the following Azure sources:

  • Activity Logs
  • Authorization
  • Compute
  • Network
  • Storage
  • NSG Flow Logs
  • Storage Account Logs
  • Blobs

Required Setup in Azure

CloudHunter data is ingested directly from Azure Storage Accounts. To configure Sift Security to read data from Azure Storage Accounts:

  1. In the Azure Portal, go to More services, type Activity log, and complete the following steps to export the activity logs to a storage account.
  • In the “Export activity log” section, select all the Regions and check the box to “Export to a storage account”
  • Select an existing storage account or create a new one
  • Note: It will take at least 30 minutes before you start seeing logs in the storage account.
  2. In the Azure Portal, navigate to each of the Storage Accounts you want logged.
  • Click on “Diagnostics” and enable the “Blob Logs”, “Table Logs”, and “Queue Logs”

  • Once enabled, a blob container named $logs will show up with your logging data.

  • Reads, writes, and deletes must be enabled separately. These can be enabled using the Azure CLI

    • For each storage account, run:

    az storage logging update --account-key {STORAGE_ACC_KEY} --account-name {STORAGE_ACC_NAME} --service bqt --log wrd --retention {NUM_DAYS}

    • To verify permissions, run:

      az storage logging show --account-key {STORAGE_ACC_KEY} --account-name {STORAGE_ACC_NAME}

  3. Set up Network Security Group Flow logs

    • Create a Network Watcher in (More Services > Networking > Network Watcher)

    • Enable NSG Flow Logging:

      • Register Microsoft.Insights as a provider (Subscriptions > Resource Providers. Register microsoft.insights)
      • Go to the Network Watcher you created and click on NSG Flow logs
      • Enable it for each of your NSGs, forwarding to the same bucket you created earlier
      • These logs will show up as a blob named insights-log-networksecuritygroupflowevent
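In the az storage logging commands earlier in this section, --service bqt selects the blob, queue, and table services, and --log wrd enables write, read, and delete logging. A small illustrative sketch of how those flag letters decode:

```python
# Decode the single-letter flags used by "az storage logging update" above:
# --service selects which storage services to configure, and --log selects
# which operation types to record.
SERVICES = {"b": "blob", "q": "queue", "t": "table"}
OPERATIONS = {"r": "read", "w": "write", "d": "delete"}

def decode(flags, table):
    return [table[c] for c in flags]

print(decode("bqt", SERVICES))    # ['blob', 'queue', 'table']
print(decode("wrd", OPERATIONS))  # ['write', 'read', 'delete']
```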

Azure Setup Instructions

CloudHunter data is ingested directly from Azure Storage Accounts. To configure Sift Security to read data from Azure:

  1. Click the gear icon on the top right
  2. Click “Azure” to configure a connection to an Azure Storage Account
  3. Give the input a name to identify it in your list of inputs
  4. Select which type of data resides in this bucket (e.g., Activity Logs or other)
  5. Specify the name of the storage account
  6. Fill in a key to access this storage account

This process must be repeated for each account you wish to ingest into Sift Security. Ingestion starts immediately when the configuration is saved and may take a few hours, depending on how much data is available.


Ingesting Data from Azure Security Center


The following instructions will help you set up Azure Security Center in your account and grant CloudHunter access. To allow CloudHunter to poll Security Center, a set of keys needs to be created so that CloudHunter can authenticate to your account with read permissions. We have provided scripts to help automate the process. The procedures below refer to the scripts that you will be provided to assist in the setup.

Automated Onboarding of Azure Security Center Client


  1. Make sure the installer’s tenant has the Azure AD User Setting “App Registrations” set to “Yes”, as well as Microsoft.Authorization/*/Write permissions to assign the app role and Security Writer scope to enable Security Center policies, or an admin role. If not, the tenant admin needs to perform steps 2a-e in the Manual Instructions, and run the script.
  2. You will require Python 2.7 with the pip modules from azure_installer_requirements.txt installed (“pip install -r azure_installer_requirements.txt”).

How to Enable Security Center Polling:

  1. You will need to run the setup script for each Azure tenant. Below is a sample execution and output with no command-line arguments submitted to the script, triggering the interactive prompt mode:

    $ python
    Azure Tenant ID:  [unique-name]
    Azure Username to use Delegate Permissions for Setup: user@[tenant ID]
    Enter your Azure password for the user: [password]
  2. Perform the following steps before continuing:

     a. Log in to your Azure Portal
     b. Navigate to Azure Active Directory → App Registrations
     c. In the “Search by name or appID” box, enter 4df500fc-9650-4c47-ad91-c468e7919aa4, and select “All Apps” from the search drop-down
     d. Click on the app sift-securitycenter in the search list
     e. Click on the “Settings” gear
     f. Click on “Required Permissions”
     g. Click on “Grant Permissions” and the “Yes” button on the confirmation pop-up
  3. Return to the executing script and hit “Enter” when the steps above have completed, or type “Q” to quit setup. Upon success, three values will be printed to the terminal to be provided to Sift:
  • Tenant Id
  • Application Client Id
  • Client Secret Key
  4. Use these values to add an Azure Security Center data source to CloudHunter by:
  • Logging into CloudHunter
  • Clicking the gear icon in the top right corner
  • Clicking on Add Data Source in the DATA SOURCES tab
  • Clicking on Azure Security Center and entering the required values obtained in step 3

Manual Instructions

Use these instructions if you wish to skip the script and set up Security Center manually. You do not need to follow these instructions if you have already followed the Automated Onboarding of Azure Security Center Client instructions above.

Manual Steps to Enable Security Center Polling:

  1. Enable Policy
     a. Go to the Security Center blade
     b. Click on a subscription, to configure all Resource Groups to inherit the policy setting at the subscription level
     c. Under Policy Components, click on Security Policy
        i. Set all policies to “Yes”
        ii. Click “Save”
     d. (optional) Under Policy Components, select Data Collection to enable VM policy alerts (Security Updates, Security Configurations, Endpoint Protection)
        i. Set Data Collection to “On”
        ii. Leave the default workspace selected, unless the client has an OMS Workspace they want you to use instead
        iii. Click “Save”
     e. Repeat steps 1.b - 1.d.iii for all subscriptions
  2. Create Application Login
     a. Go to the Azure Active Directory blade
     b. Select the App Registrations blade
     c. Click “New application registration”
        i. Set Name to “Sift-SecurityCenterPoller”
        ii. Set application type to “Web app / API”
        iii. Since this will be a single-tenant access app, set Sign-on URL to “http://localhost”
        iv. Click “Create”
     d. With the new App Id in focus
        i. Write down the “Application ID”
        ii. Click the “Settings” gear
           1. Under API Access, click Keys
           2. Enter key name “Sift” into the “Description” box
           3. Select an “Expires” duration for the access key
           4. Click “Save”, and when it appears, write down the key “Value”
        iii. Click the “Settings” gear
           1. Click Required permissions
           2. Click “Add”
           3. Click “1. Select an API”
              a. Choose the API “Windows Azure Service Management API”
              b. Click the “Select” button
           4. Click “2. Select Permissions”
              a. Check the box under “Delegated Permissions”, next to “Access Azure Service Management as organization users”
              b. Click the “Select” button
           5. Click the “Done” button
     e. Open the Subscriptions blade
        i. Click on a subscription to grant the role to the new AppId
        ii. Click on Access Control (IAM)
        iii. Click “Add”
        iv. In the Role box, type “Security Reader” and select it in the combo box
        v. Leave the “Assign access to” combo box set to “Azure AD user, group, or application”
        vi. In the “Select” combo box, enter the app Id “Sift-SecurityCenterPoller”, and click it to add it to the Selected Member list
        vii. Click “Save”
        viii. Repeat steps 2.d.i - 2.e.vii for all subscriptions

Ingesting Data from GCP

To configure your GCP account for CloudHunter, you will need to:

  1. Create a service account
  2. Assign appropriate roles to the service account
  3. Enable APIs
  4. Add the data source to CloudHunter for ingestion

Creating a service account

To create service accounts, a user must have the Service Account Admin Role (roles/iam.serviceAccountAdmin).

Log in to gcloud with a user that has appropriate permissions.

You will need to set one of the projects as the default project for the service account, since a service account must be associated with a project. This can be done by:

gcloud config set project $PROJECT_ID

Here $PROJECT_ID is the ID of the project where the service account will exist.

A service account can be created through the GCP console or through gcloud by executing the following command:

gcloud iam service-accounts create cloudhunter-service-account --display-name "Cloudhunter-Service-Account"

The name of the service account associated with Cloudhunter should be Cloudhunter-Service-Account and its ID should be cloudhunter-service-account

Assigning roles to service account

For CloudHunter to scan resources across your GCP organization, you must assign roles to Cloudhunter-Service-Account. These roles must be granted at the organization level so that the service account can scan all the projects under the organization.

You must have the Organization Admin role to grant roles to the service account. In the commands below, $ORGANIZATION_ID is the organization’s ID, and $PROJECT_ID is the project ID associated with the service account, which will be the core project. You can grant the permissions to the service account either in the console or via the command line.

Granting permissions in the console

Log into the Google Cloud console, and click on the IAM service. Click on the pencil icon next to Cloudhunter-Service-Account to edit it, and add the following roles to the service account:

  • App Engine / App Engine Viewer
  • Big Query / Big Query Data Viewer
  • Cloud SQL / Cloud SQL Viewer
  • Compute Engine / Compute Network Viewer
  • Compute Engine / Compute Viewer
  • IAM / Security Viewer
  • Kubernetes Engine / Kubernetes Engine Viewer
  • Logging / Logs Viewer
  • Organization Policy / Organization Policy Viewer
  • Project / Browser
  • Project / Viewer
  • Pub/Sub / Pub/Sub Viewer
  • Resource Manager / Folder Viewer
  • Resource Manager / Organization Viewer
  • Roles / Organization Role Viewer

Granting permissions on the command line

Or, if using the command line, grant the same roles by executing the following loop (the service-account address follows the standard format cloudhunter-service-account@$PROJECT_ID.iam.gserviceaccount.com):

for role in \
    roles/appengine.appViewer \
    roles/bigquery.dataViewer \
    roles/browser \
    roles/cloudsql.viewer \
    roles/compute.networkViewer \
    roles/compute.viewer \
    roles/container.viewer \
    roles/iam.organizationRoleViewer \
    roles/iam.securityReviewer \
    roles/logging.viewer \
    roles/orgpolicy.policyViewer \
    roles/pubsub.viewer \
    roles/resourcemanager.folderViewer \
    roles/resourcemanager.organizationViewer \
    roles/servicemanagement.quotaViewer \
    roles/viewer
do
    gcloud organizations add-iam-policy-binding $ORGANIZATION_ID \
        --member=serviceAccount:cloudhunter-service-account@$PROJECT_ID.iam.gserviceaccount.com \
        --role=$role
done

Enable APIs

APIs can be enabled by running the following command once for each API required by the services above, substituting the API’s service name:

gcloud beta services enable <api-name>

Add GCP data source to CloudHunter

Creating the service account, adding IAM roles, and enabling APIs may take some time to propagate through GCP. Once the roles are successfully assigned to the service account and the APIs are enabled, proceed by creating and downloading a new key for your service account:

gcloud iam service-accounts keys create cloudhunter-service-account.json --iam-account cloudhunter-service-account@$PROJECT_ID.iam.gserviceaccount.com

The key will be saved in cloudhunter-service-account.json
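Before pasting the key into CloudHunter, it can help to confirm the file is a valid service-account key; a small sketch (the helper name is illustrative, and the field names follow the standard GCP key format):

```python
import json

# Sketch: sanity-check a downloaded key file before pasting its contents
# into CloudHunter. Field names follow the standard GCP service-account
# key format.
def looks_like_service_account_key(raw: str) -> bool:
    try:
        key = json.loads(raw)
    except ValueError:
        return False
    required = {"type", "project_id", "private_key", "client_email"}
    return key.get("type") == "service_account" and required <= key.keys()

sample = json.dumps({
    "type": "service_account",
    "project_id": "my-project",
    "private_key": "-----BEGIN PRIVATE KEY-----...",
    "client_email": "cloudhunter-service-account@my-project.iam.gserviceaccount.com",
})
print(looks_like_service_account_key(sample))  # True
```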

After generating keys for your service account, add the GCP data source to CloudHunter:

  1. Click the gear icon on the top right.
  2. Select “Add Data Source” and “GCP API”
  3. Give your source a distinct input name.
  4. Copy and paste the JSON key (the complete contents of your JSON key file, cloudhunter-service-account.json).

Ingesting Data from Splunk

Sift Security natively supports any data using the Splunk Common Information Model (CIM). If you are a Splunk customer already using CIM and have your tags set appropriately, no additional work is needed to get started. To start ingesting Splunk data:

  1. Click the gear icon on the top right
  2. Click “Splunk Server” to configure a connection to a Splunk server.
  3. Enter the details of your Splunk server.


  • You should create a read-only user role in Splunk for Sift Security.
  • Ensure that your Splunk server is accessible from the Sift Security instance.
  4. Add a query to run.
  • Click the gear icon on the top right, and then select “Splunk Query” to set up a query. The query field adheres to the Splunk query syntax. You can choose to run this query once, or as a recurring, real-time query using the “mode” parameter. If you want to run (or re-run) a query, mark the “Status” parameter as New.

You can also pull additional information on-demand from Splunk from the “Canvas” tab. For example, you might ingest alerts in real time, but want to pull additional information on-demand to investigate an alert.

  • Right-click on a node in the canvas, and navigate to “Actions” > “Add data from Splunk”
  • Edit the form that pops up with the required values. The query follows the Splunk query syntax, while the start and end times of your query default to the values of your Canvas timeline selection. You can edit both of these fields prior to submitting the query.
  • Once the query is complete, the data will pass through the ingestion pipeline and be available in Sift Security.


If you don’t have tags set in your Splunk data, you must remedy this in Splunk prior to ingestion. See Categorical Data Models for the tags that must be configured for each data source. See the Splunk Data Normalization documentation for more details of how to change tags in Splunk.

To ensure that your data are CIM formatted, use a CIM-compatible plugin to ingest your data into Splunk.
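As a sketch of what such tagging looks like in Splunk, the following hypothetical eventtypes.conf and tags.conf stanzas map a custom firewall sourcetype to the CIM Network Traffic tags (the sourcetype and eventtype names are assumptions):

```
# eventtypes.conf -- define an eventtype matching the data
[my_fw_traffic]
search = sourcetype=my:firewall:log

# tags.conf -- apply the CIM Network Traffic tags to that eventtype
[eventtype=my_fw_traffic]
network = enabled
communicate = enabled
```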

Ingesting Data from ServiceNow

The ServiceNow Integration can be used to pull incidents, threat intelligence, vulnerabilities, organizations, assets, and users from ServiceNow. This is useful both to visually explore incidents and to provide extra context on top of log data.

On your ServiceNow Instance

  1. Install the Sift Security Integration application from the ServiceNow store.
  2. Navigate to your ServiceNow instance where the Sift Security application is installed and open the ‘UI Actions’ table. Search for Sift Security in the UI Actions table and click on it. Change the variable ‘sift_host’ to your Sift Security instance URL, and click Update.

On your Sift Security CloudHunter Instance:

  1. Navigate to the INTEGRATIONS tab

  2. Click on ADD INTEGRATION and select Service Now

  3. Fill in the form with all the necessary information

  4. [Optional] Follow these steps to share your Sift Security reports in the Incidents table in ServiceNow:
    1. ssh into your Sift Security instance

    2. Edit the file using sudo vi /opt/sift/dispatcher/lib/servicenowagent/conf/credentials.conf to add your ServiceNow instance’s credentials and URL

    3. Restart the service using sudo systemctl restart sift-dispatcher

    4. You can test that these credentials work by performing the following test:
      1. From a command line, run the following command, replacing the placeholders yourservicenowinstance, yourusername, and yourpassword with the appropriate values:
      curl --user yourusername:yourpassword \
      --header "Content-Type: application/json" \
      --header "Accept: application/json" \
      --request POST \
      --data '{"short_description":"Test with CURL"}' \
      https://yourservicenowinstance.service-now.com/api/now/table/incident
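The same credentials check can also be sketched with Python's standard library, assuming the standard ServiceNow Table API incident endpoint; the placeholder values below are hypothetical and must be replaced with your own:

```python
import base64
import json
import urllib.request

# Hypothetical placeholder values -- replace with your own.
instance = "yourservicenowinstance"
user, password = "yourusername", "yourpassword"

# Standard ServiceNow Table API endpoint for the incident table.
url = f"https://{instance}.service-now.com/api/now/table/incident"
body = json.dumps({"short_description": "Test from Python"}).encode()

req = urllib.request.Request(url, data=body, method="POST")
req.add_header("Content-Type", "application/json")
req.add_header("Accept", "application/json")
auth = base64.b64encode(f"{user}:{password}".encode()).decode()
req.add_header("Authorization", f"Basic {auth}")

# urllib.request.urlopen(req) would send the request; an HTTP 201
# response containing the new record's sys_id means the credentials work.
```

An HTTP 201 response containing the created incident's sys_id indicates that the credentials are valid.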

Sift Security will start to pull in data from the incident table in ServiceNow. As the data is ingested into Sift Security, users can see it in the Risks, Canvas, and Search tabs. From ServiceNow, you can pivot to Sift Security by clicking the Sift Security button.


Alternatively, you can access the data (and fetch additional data) directly from Sift Security. To learn about Sift Security, walk through the product tour (click the blue question mark in the upper right corner) and/or the Quickstart Guide (inside the Support tab).


It may be helpful to watch the video demonstration here

Advanced: Manual Configuration of Data Sources

In addition to the data sources you can configure through the UI, support for the following data sources can be enabled by editing configuration files.

  • Anything in the CIM format using type: cim
  • CarbonBlack using type: carbonblack
  • Palo Alto Networks using type: panw
  • Windows Event Logs (through nxlog) using type: windows-event
  • Windows DHCP using type: windows-dhcp

These natively supported data sources can be ingested through syslog feeds, files, TCP sockets, and other means by using logstash input filters, described in Input Filters. During configuration of your logstash input filters, use the type values from the list above. Templates for popular inputs are included with the Sift Security installation.
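As a sketch, a logstash input that listens on a TCP socket for Palo Alto Networks logs and applies the matching type value from the list above might look like the following (the port number is an assumption):

```
input {
  tcp {
    port => 5514
    type => "panw"
  }
}
```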

Advanced: Adding Support for a New Data Source

Adding support for a new data source that isn’t on the list in the preceding section and doesn’t already adhere to the CIM model involves writing a logstash config to convert it to Sift Security’s internal model. Detailed instructions for doing this are available in Data Model.
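A minimal sketch of such a conversion, assuming a hypothetical space-delimited custom source and illustrative target field names (your actual mapping must follow the fields described in Data Model):

```
filter {
  if [type] == "my-custom-source" {
    dissect {
      # Split the raw message into illustrative fields.
      mapping => { "message" => "%{ts} %{src_ip} %{dest_ip} %{action}" }
    }
    date {
      # Use the parsed timestamp as the event time.
      match => [ "ts", "ISO8601" ]
    }
  }
}
```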

There are additional data sources for which Sift Security has already done the prerequisite work, but which aren’t currently available in the AMI. If you are interested in ingesting any of the following sources, please contact Sift Security for details.

  • Bro Conn, DHCP, DNS, HTTP, SSL, and weird
  • ClearPass
  • Cisco ASA
  • Cisco IronPort
  • CriticalStack
  • Cylance
  • Cymon
  • DigitalGuardian
  • McAfee Web Gateway
  • Nginx
  • Symantec Antivirus

Visualization: Adding custom visualizations and dashboards

To add, view, or edit visualizations and dashboards, proceed to the Search tab. Under the Search tab, on the left side, you will find the ‘Visualize’ and ‘Dashboard’ options.

**We advise that you do not edit the default dashboards and visualizations directly, but rather rename and save copies of them. This ensures that future updates to the default visualizations do not overwrite your custom changes.**