ServiceNow Setup Guide

This guide provides step-by-step instructions for setting up the sample jobs located in the ServiceNow example folder.

Before continuing with this guide, ensure you have completed the following prerequisites:

Preparation

  1. Download or clone the Quickstart repository to your local machine.

  2. Navigate to the examples/service_now directory.

  3. Open the setup.sh file in a text editor and update the following variables, or alternatively set their values as environment variables (a sample export sketch follows the list):

    • CONN The SnowSQL connection for the Snowflake account, pointing to the database SNOWEAVER and the schema LEARN_DEV. Verify that the role associated with this connection has the privileges required to execute the commands in setup.sql. Ideally, use the ACCOUNTADMIN role or an equivalent role in a sandbox environment.

    • USER The Snowflake user defined in the SnowSQL connection.

    • SNOW_INSTANCE The unique identifier of your ServiceNow instance.

    • SNOW_USERNAME The user ID of your test account.

    • SNOW_PASSWORD The password of your test account.
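
    For example, to supply these values as environment variables instead of editing the file, export them in your shell before running the script. This is a minimal sketch that assumes setup.sh reads variables of the same names from the environment; all values shown are placeholders:

    export CONN=my_snowsql_connection
    export USER=MY_SNOWFLAKE_USER
    export SNOW_INSTANCE=dev12345
    export SNOW_USERNAME=snow.test.user
    export SNOW_PASSWORD='********'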

Set up the sample project

  1. On your client machine, open a command-line terminal (Linux, macOS) or Git Bash (Windows) and navigate to the same examples/service_now directory.

  2. Execute the following command to set up the example project:

    bash setup.sh
    

    Note

    If the LEARN_DEV project already exists, all its resources will be deleted, with the exception of the DATA stage and the RESULTS table.

    To retain your data, remove the -d flag from the sw import all -c ${CONN} -d command in the setup.sh script.

    Alternatively, use the Snoweaver CLI to create a backup of the project’s contents.

  3. After the execution completes, you can access the sample jobs in the LEARN_DEV project.

Sample jobs

snow_get_table_records

This job extracts records from a ServiceNow table using the Table API.

It accepts the following input parameters (a sample invocation follows the list):

  • table: The name of the ServiceNow table to query

  • query: The filter to apply to the query

  • limit: The maximum number of records to retrieve per batch

  • offset: The number of records to skip before retrieving the next batch

  • get_all: A boolean flag to determine if all records should be retrieved (true) or just a batch (false)
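
For example, a single batch of incident records could be extracted with a call like the following. This is a minimal sketch: it assumes the job is deployed as the stored procedure created by setup.sh (the same call pattern appears in the task example later in this guide), and the filter value is illustrative:

    CALL SNOWEAVER.LEARN_DEV.SNOW_GET_TABLE_RECORDS(
       'incident',    -- table
       'active=true', -- query: an illustrative encoded query
       30,            -- limit
       0,             -- offset
       FALSE          -- get_all: retrieve a single batch only
    );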

The process follows these steps:

  1. The job retrieves the first batch of records using the provided input parameters:

    • table: The name of the ServiceNow table to query

    • limit: The maximum number of records to retrieve per batch

    • sysparm_query: The filter to apply to the query

  2. In the post-hook, the job performs the following checks:

    • Verifies if the get_all flag is set to true

    • Checks if the total count of records (from the response header) is greater than the current offset plus the limit

  3. If both conditions in step 2 are met:

    • The job updates the offset by adding the current limit

    • It then calls itself recursively with the updated parameters

    • This process continues until all records have been retrieved, i.e., until the current offset plus the limit reaches or exceeds the total record count (for example, with a limit of 30 and 100 total records, the job runs with offsets 0, 30, 60, and 90)

  4. All extracted results are stored in the RESULTS table and tagged with a scope label in the format snow_{{_vars.table}} (for example, snow_incident for the incident table).
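
Once a run completes, the extracted records can be inspected by filtering the RESULTS table on the scope label, for example:

    SELECT *
    FROM SNOWEAVER.LEARN_DEV.RESULTS
    WHERE SCOPE = 'snow_incident';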

snow_create_table_record

This job creates a new record in the specified ServiceNow table using the Table API.

It accepts the following input parameters (a sample invocation follows the list):

  • table: The name of the ServiceNow table where the record will be created

  • data: A JSON object containing the field values for the new record
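
A minimal invocation might look like the following sketch; the field values are illustrative, and the call pattern mirrors the run-through later in this guide:

    SELECT SNOWEAVER.LEARN_DEV.SNOW_CREATE_TABLE_RECORD(
       'incident',
       {'short_description':'Sample incident','urgency':'3'}
    ) AS RESULT;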

snow_update_table_record

This job modifies an existing record in the specified ServiceNow table using the Table API.

It accepts the following input parameters (a sample invocation follows the list):

  • table: The name of the ServiceNow table containing the record to be updated

  • sys_id: The unique identifier of the record to be updated

  • data: A JSON object containing the field values to be updated
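
A minimal invocation might look like the following sketch; the sys_id value is a placeholder for an existing record:

    SELECT SNOWEAVER.LEARN_DEV.SNOW_UPDATE_TABLE_RECORD(
       'incident',
       '<sys_id>',  -- placeholder: sys_id of the record to update
       {'urgency':'2'}
    );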

snow_lib

A shared library of macros used by the sample jobs.

check_error

This macro evaluates the status code of the response. If an error is detected, it raises an exception, providing the error message extracted from the response payload.

How to run the sample jobs

  1. To set up the Snowflake resources needed for the examples, open a Snowflake worksheet and execute the following statements:

    USE ROLE LEARN_DEV_DEVELOPER;
    USE WAREHOUSE LEARN_SW_WH;
    USE SCHEMA LEARN_SW.DEV;
    
    -- create a view for the snow_incident scope.
    CREATE OR REPLACE VIEW V_SNOW_INCIDENT_QUERY_RESULTS
    AS
    SELECT *
    FROM SNOWEAVER.LEARN_DEV.RESULTS
    WHERE SCOPE = 'snow_incident';
    
    -- create a stream on the view
    CREATE OR REPLACE STREAM
    STREAM_SNOW_INCIDENT_QUERY_RESULTS
    ON VIEW V_SNOW_INCIDENT_QUERY_RESULTS
    APPEND_ONLY = TRUE
    SHOW_INITIAL_ROWS = TRUE;
    
    -- create a table to store the incident history.
    CREATE OR REPLACE TABLE LEARN_SW.DEV.SNOW_INCIDENT_CHANGE_HISTORY (
       CHANGE_ID NUMBER AUTOINCREMENT,
       SYS_ID VARCHAR,
       INCIDENT_NUMBER VARCHAR,
       SYS_UPDATED_ON TIMESTAMP,
       SHORT_DESCRIPTION VARCHAR,
       DATA VARIANT
    );
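
    Optionally, confirm that the stream was created and has rows to consume. SYSTEM$STREAM_HAS_DATA is a standard Snowflake function; the object name matches the stream created above:

    SHOW STREAMS LIKE 'STREAM_SNOW_INCIDENT_QUERY_RESULTS';
    SELECT SYSTEM$STREAM_HAS_DATA('STREAM_SNOW_INCIDENT_QUERY_RESULTS');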
    
  2. Create a task to extract the change history from ServiceNow by calling the SNOW_GET_TABLE_RECORDS job.

    CREATE OR REPLACE TASK SNOW_EXTRACT_INCIDENT_CHANGE_HISTORY
    WAREHOUSE = LEARN_SW_WH
    AS
    BEGIN
       -- Create a WHERE clause to verify the high-water mark condition if the target table contains data
       LET query_condition VARCHAR :=
          NVL('sys_updated_on > ' ||
              TO_VARCHAR((SELECT MAX(SYS_UPDATED_ON)
                          FROM SNOW_INCIDENT_CHANGE_HISTORY),
                         'YYYY-MM-DD"T"HH24:MI:SS'),
              '');
    
       -- Call SNOW_GET_TABLE_RECORDS to extract incident data
       CALL SNOWEAVER.LEARN_DEV.SNOW_GET_TABLE_RECORDS(
          'incident',
          :query_condition,
          30,  -- limit
          0,   -- offset
          TRUE -- get_all: Retrieve all records
       );
    
       RETURN 'Task successfully completed.';
    END;
    
    -- execute the task for the initial extract
    EXECUTE TASK SNOW_EXTRACT_INCIDENT_CHANGE_HISTORY;
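
    To confirm that the extract task completed successfully, you can query Snowflake's built-in task history. This optional check uses the standard INFORMATION_SCHEMA.TASK_HISTORY table function:

    SELECT NAME, STATE, ERROR_MESSAGE, COMPLETED_TIME
    FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY(
       TASK_NAME => 'SNOW_EXTRACT_INCIDENT_CHANGE_HISTORY'))
    ORDER BY SCHEDULED_TIME DESC
    LIMIT 5;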
    
  3. Create a task to load the new records from the stream into the change history table.

    CREATE OR REPLACE TASK SNOW_LOAD_INCIDENT_CHANGE_HISTORY
    WAREHOUSE=LEARN_SW_WH
    AS
    INSERT INTO SNOW_INCIDENT_CHANGE_HISTORY
    (SYS_ID, INCIDENT_NUMBER, SYS_UPDATED_ON, SHORT_DESCRIPTION, DATA)
    SELECT
       R.VALUE:sys_id::VARCHAR,
       R.VALUE:number::VARCHAR,
       R.VALUE:sys_updated_on::TIMESTAMP,
       R.VALUE:short_description::VARCHAR,
       R.VALUE
    FROM STREAM_SNOW_INCIDENT_QUERY_RESULTS SQR,
    LATERAL FLATTEN(input=>SQR.DATA:result) R;
    
    -- execute the task for the initial load
    EXECUTE TASK SNOW_LOAD_INCIDENT_CHANGE_HISTORY;
    
    -- wait for 20 seconds and then check the results in the target table
    SELECT * EXCLUDE (DATA), DATA:urgency
    FROM SNOW_INCIDENT_CHANGE_HISTORY
    ORDER BY SYS_UPDATED_ON DESC;
    
  4. Run the following statements to create an incident:

    -- Create a new incident
    CREATE OR REPLACE TEMPORARY TABLE SNOW_INCIDENT_CREATED
    AS
    SELECT SNOWEAVER.LEARN_DEV.SNOW_CREATE_TABLE_RECORD('incident',
    {'short_description':'Data Load Failure 1','urgency':'3'}
    ) AS RESULT;
    
    -- extract the change
    EXECUTE TASK SNOW_EXTRACT_INCIDENT_CHANGE_HISTORY;
    
  5. Run the following statements to update the newly created incident:

    SELECT SNOWEAVER.LEARN_DEV.SNOW_UPDATE_TABLE_RECORD(
       'incident',
       (SELECT RESULT:result:sys_id FROM SNOW_INCIDENT_CREATED),
       {'urgency':'2'}
    );
    
    -- extract the change
    EXECUTE TASK SNOW_EXTRACT_INCIDENT_CHANGE_HISTORY;
    
  6. Run the following statements to load the changes into the target table:

    EXECUTE TASK SNOW_LOAD_INCIDENT_CHANGE_HISTORY;
    
    -- wait for 20 seconds and then check the results in the target table
    SELECT * EXCLUDE (DATA), DATA:urgency
    FROM SNOW_INCIDENT_CHANGE_HISTORY
    ORDER BY SYS_UPDATED_ON DESC;