Zscaler ZIA logs#

When Zscaler Secure Internet and SaaS Access (ZIA) log ingestion is configured and enabled, Scrutinizer can run both preconfigured and custom reports against the collected ZIA log data.

Follow the steps below to set up a ZIA Cloud or VM-based Nanolog Streaming Service (NSS) feed for firewall logs and configure Scrutinizer to ingest the log flows.

Cloud NSS#

To enable log ingestion from a Cloud NSS feed, set up the feed in the ZIA Admin Portal before configuring Scrutinizer to download and consume the logs.

Adding/creating a Cloud NSS feed#

To create and configure the feed, log in to the ZIA Admin Portal and follow these steps:

  1. Navigate to Administration > Nanolog Streaming Service.

  2. Select the Cloud NSS Feeds tab, and then click Add Cloud NSS Feed.

  3. In the next window, configure the following (other details can be set as needed/desired):

    • NSS Type: Select NSS for Firewall.

    • SIEM Type: Select Other.

    • OAuth 2.0 Authentication: Leave disabled (not currently supported when the SIEM Type is Other).

    • Max Batch Size: Adjust the value to match log throughput as closely as possible (can be set lower to improve latency at the cost of network overhead).

    • API URL: Enter https://SCRUTINIZER_DOWNLOADER_IP:8888/ (address and port may be different if translation or forwarding is enabled)

    • HTTP Headers: Create a PlixerAccessToken key and note the corresponding value for later use (recommended for authentication in place of OAuth).

    • Log Type: Select Firewall Logs or DNS Logs.

    • Firewall Log Type: Select Both Session and Aggregate Logs. Some events are only provided as aggregate logs. Including both ensures that all events are sent.

    • Feed Output Type: Select JSON.

    • JSON Array Notation: Leave disabled.

    • Feed Escape Character: Enter ,\" (required to avoid unintentional termination of string data values).

    • Feed Output Format: See the JSON output format section below for more details.

To stream both log types (firewall and DNS), create a feed for each type.
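
With these settings, the Cloud NSS feed delivers each batch as an HTTPS POST of newline-delimited JSON to the API URL, with the PlixerAccessToken header attached. The sketch below illustrates the shape of that request; the token value and event fields are placeholders, and no request is actually sent:

```python
import json

# Hypothetical values for illustration; substitute the real feed settings.
API_URL = "https://SCRUTINIZER_DOWNLOADER_IP:8888/"
ACCESS_TOKEN = "example-token-value"  # value of the PlixerAccessToken header

# A batch is newline-delimited JSON: one {"sourcetype": ..., "event": ...}
# object per line, matching the configured Feed Output Format.
events = [
    {"sourcetype": "zscalernss-fw",
     "event": {"datacenter": "DC1", "epochtime": 1718553290}},
    {"sourcetype": "zscalernss-fw",
     "event": {"datacenter": "DC1", "epochtime": 1718553291}},
]
body = "\n".join(json.dumps(e) for e in events)

# Headers the feed attaches to each POST, per the settings above.
headers = {
    "Content-Type": "application/json",
    "PlixerAccessToken": ACCESS_TOKEN,
}

print(f"POST {API_URL} ({len(events)} events, {len(body)} bytes)")
```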

Configuring log ingestion in Scrutinizer#

After the feed has been created, configure Scrutinizer to receive the logs as follows:

  1. In the Scrutinizer web interface, navigate to Admin > Integrations > Flow Log Ingestion.

  2. Click the + icon, and then select Zscaler ZIA in the tray.

  3. In the secondary tray, configure the following details:

    • Type: Select Cloud NSS.

    • Log Source: Select the log type that was set to be streamed.

    • Log Downloader: IP address of the Scrutinizer server/collector that will download the logs (must match the address entered for the API URL)

    • Log Downloader Port: Port to use to receive logs on the downloader (must match the port entered for API URL)

    • Worker Count: Enter 1 (can be increased if the log streaming rate seems slow; recommended maximum of 10).

    • Certificate Authority: Full contents of the public API URL SSL certificate file (see this section for troubleshooting help)

    • Private Key: Full contents of the SSL certificate’s private key file (will be encrypted at rest)

    • Access Token: Value of the PlixerAccessToken created for HTTP header authentication

    • Flow Collectors: Select the Scrutinizer collector(s) that will receive the logs as standard flows (in distributed clusters, a remote collector is recommended for this role)

  4. Click the Save button to add the log stream with the current settings.

Once added, the Cloud NSS feed will be listed in the main Admin > Integrations > Flow Log Ingestion view under the configured downloader IP address and port. If multiple feeds were created to stream different ZIA log types, each one requires a matching configuration in Scrutinizer.

VM-based NSS#

When setting up log ingestion via VM-based NSS, deploy and then register the NSS VM/server in the ZIA Admin Portal before creating the feed and configuring Scrutinizer to download and consume the logs.

Adding/creating a VM-based NSS feed#

To create and configure the feed, log in to the ZIA Admin Portal and follow these steps:

  1. Navigate to Administration > Nanolog Streaming Service.

  2. Select the NSS Feeds tab, and then click Add NSS Feed.

  3. In the next window, configure the following (other details can be set as needed/desired):

    • NSS Server: Select the NSS server to be used to stream logs to Scrutinizer.

    • SIEM Destination Type: Select the method the NSS server should use to address the Scrutinizer downloader/server.

    • SIEM IP Address: Enter the Scrutinizer downloader’s IP address or FQDN (address may be different if translation or forwarding is enabled).

    • SIEM TCP Port: TCP port on the downloader to be used to receive the logs (may also be different if translation is enabled)

    • SIEM Rate: Leave on Unlimited unless Scrutinizer is overwhelmed.

    • Log Type: Select Firewall Logs or DNS Logs.

    • Firewall Log Type: Select Both Session and Aggregate Logs. Some events are only provided as aggregate logs. Including both ensures that all events are sent.

    • Feed Output Type: Select JSON.

    • Feed Escape Character: Enter ,\" (required to avoid unintentional termination of string data values).

    • Feed Output Format: See the JSON output format section below for more details.

    • Duplicate Logs: Select Disabled (recommended for reporting accuracy).

To stream both log types (firewall and DNS), create a feed for each type.
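
Unlike Cloud NSS, a VM-based NSS server streams the feed as newline-delimited JSON over a plain TCP connection to the configured SIEM IP address and port. As a rough illustration of that transport (not Plixer's actual implementation), this sketch receives a small batch over a local socket:

```python
import json
import socket
import threading

received = []

def sink(server: socket.socket) -> None:
    # Accept one connection and parse newline-delimited JSON events,
    # roughly what the downloader's TCP sink does with an NSS feed.
    conn, _ = server.accept()
    with conn:
        buf = b""
        while chunk := conn.recv(4096):
            buf += chunk
    for line in buf.decode().splitlines():
        if line.strip():
            received.append(json.loads(line))

# Listen on an ephemeral localhost port; a real deployment uses the
# fixed SIEM TCP Port configured for the feed.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
worker = threading.Thread(target=sink, args=(server,))
worker.start()

# Simulate the NSS server streaming two firewall events.
event = {"sourcetype": "zscalernss-fw",
         "event": {"datacenter": "DC1", "epochtime": 1718553290}}
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(((json.dumps(event) + "\n") * 2).encode())

worker.join()
server.close()
print(f"received {len(received)} events")
```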

Configuring log ingestion in Scrutinizer#

After the feed has been created, configure Scrutinizer to receive the logs as follows:

  1. In the Scrutinizer web interface, navigate to Admin > Integrations > Flow Log Ingestion.

  2. Click the + icon, and then select Zscaler ZIA in the tray.

  3. In the secondary tray, configure the following details:

    • Type: Select VM NSS.

    • Log Source: Select the log type that was set to be streamed.

    • Log Downloader: Select the IP address of the Scrutinizer server/collector that will download the logs (must match the SIEM IP Address entered for the feed)

    • Log Downloader Port: Port to use to receive logs on the downloader (must match the SIEM TCP Port entered for the feed)

    • Worker Count: Enter 1 (can be increased if the log streaming rate seems slow; recommended maximum of 10).

    • Flow Collectors: Select the Scrutinizer collector(s) that will receive the logs as standard flows (in distributed clusters, a remote collector is recommended for this role).

  4. Click the Save button to add the log stream with the current settings.

Once added, the VM-based NSS feed will be listed in the main Admin > Integrations > Flow Log Ingestion view under the configured downloader IP address and port. If multiple feeds were created to stream different ZIA log types, each one requires a matching configuration in Scrutinizer.

JSON output format#

Event logs streamed to Scrutinizer are expected in the following format:

  • One JSON object per event (separated by newlines), comprising two fields:

    • "sourcetype": Must be either "zscalernss-fw" or "zscaler-dns" (other types are discarded)

    • "event": Will contain details of the log event

  • JSON field names match names used by Zscaler.

  • Unnecessary fields can be omitted for network and storage efficiency.
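
A minimal sketch of the sourcetype filtering rule above, using hypothetical events, might look like:

```python
import json

# Three hypothetical stream lines: two supported sourcetypes and one
# that would be discarded on ingest.
stream = "\n".join([
    json.dumps({"sourcetype": "zscalernss-fw",
                "event": {"datacenter": "DC1", "epochtime": 1718553290}}),
    json.dumps({"sourcetype": "zscalernss-dns",
                "event": {"datacenter": "DC1", "epochtime": 1718553291}}),
    json.dumps({"sourcetype": "zscalernss-web", "event": {}}),
])

# Only these sourcetypes are processed; anything else is dropped.
ACCEPTED = {"zscalernss-fw", "zscalernss-dns"}
kept = [obj for obj in map(json.loads, stream.splitlines())
        if obj.get("sourcetype") in ACCEPTED]
print(f"kept {len(kept)} of 3 events")
```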

DNS event fields#

The following JSON object example shows all supported fields for ZIA DNS log events. The "datacenter" and "epochtime" fields are significant within Scrutinizer and must be included; any other fields that are not needed may be removed to reduce overhead and improve throughput.

The example is broken into lines and indented for readability and easier modification. The backslashes (\) before the JSON object delimiters ensure that Zscaler treats them as literal characters rather than as parts of field data placeholders.

It is recommended to remove all newlines and spaces before pasting the template into the NSS configuration dialog. Leaving the formatting in is harmless, but it needlessly consumes extra bandwidth.

ZIA DNS log event JSON example
\{
  "sourcetype": "zscalernss-dns",
  "event": \{
    "cip": "%s{cip}",
    "cloudname": "%s{cloudname}",
    "company": "%s{company}",
    "datacentercountry": "%s{datacentercountry}",
    "datacenter": "%s{datacenter}",
    "datacentercity": "%s{datacentercity}",
    "deviceappversion": "%s{deviceappversion}",
    "devicemodel": "%s{devicemodel}",
    "devicename": "%s{devicename}",
    "deviceostype": "%s{deviceostype}",
    "deviceosversion": "%s{deviceosversion}",
    "deviceowner": "%s{deviceowner}",
    "devicetype": "%s{devicetype}",
    "dnsapp": "%s{dnsapp}",
    "dnsappcat": "%s{dnsappcat}",
    "dnsgw_flags": "%s{dnsgw_flags}",
    "dnsgw_slot": "%s{dnsgw_slot}",
    "dnsgw_srv_proto": "%s{dnsgw_srv_proto}",
    "domcat": "%s{domcat}",
    "durationms": "%d{durationms}",
    "ecs_prefix": "%s{ecs_prefix}",
    "ecs_slot": "%s{ecs_slot}",
    "edepartment": "%s{edepartment}",
    "edevicehostname": "%s{edevicehostname}",
    "eedone": "%s{eedone}",
    "elocation": "%s{elocation}",
    "elogin": "%s{elogin}",
    "epochtime": "%d{epochtime}",
    "error": "%s{error}",
    "http_code": "%s{http_code}",
    "istcp": "%d{istcp}",
    "pcapid": "%s{pcapid}",
    "protocol": "%s{protocol}",
    "recordid": "%d{recordid}",
    "req": "%s{req}",
    "reqaction": "%s{reqaction}",
    "reqrulelabel": "%s{reqrulelabel}",
    "reqtype": "%s{reqtype}",
    "res": "%s{res}",
    "resaction": "%s{resaction}",
    "respipcat": "%s{respipcat}",
    "resrulelabel": "%s{resrulelabel}",
    "restype": "%s{restype}",
    "sip": "%s{sip}",
    "sport": "%d{sport}"
  \}
\}
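
Stripping the newlines and spaces can be scripted rather than done by hand. The sketch below minifies a shortened, hypothetical subset of the template above; because no string value in the template contains a literal space, removing all whitespace cannot corrupt field data:

```python
import re

# Hypothetical subset of the DNS feed template, with Zscaler's \{ \}
# escapes and %s/%d field placeholders left intact.
template = r'''\{
  "sourcetype": "zscalernss-dns",
  "event": \{
    "datacenter": "%s{datacenter}",
    "epochtime": "%d{epochtime}"
  \}
\}'''

# Remove every whitespace run; safe here since no quoted value
# in the template contains a space.
compact = re.sub(r"\s+", "", template)
print(compact)
```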

Firewall event fields#

The following JSON object example shows all supported fields for ZIA firewall log events. The "datacenter" and "epochtime" fields are significant within Scrutinizer and must be included; any other fields that are not needed may be removed to reduce overhead and improve throughput.

The example is broken into lines and indented for readability and easier modification. The backslashes (\) before the JSON object delimiters ensure that Zscaler treats them as literal characters rather than as parts of field data placeholders.

It is recommended to remove all newlines and spaces before pasting the template into the NSS configuration dialog. Leaving the formatting in is harmless, but it needlessly consumes extra bandwidth.

ZIA firewall log event JSON example
\{
  "sourcetype": "zscalernss-fw",
  "event": \{
    "time": "%s{time}",
    "epochtime": "%d{epochtime}",
    "csip": "%s{csip}",
    "csport": "%d{csport}",
    "cdip": "%s{cdip}",
    "cdport": "%d{cdport}",
    "cdfqdn": "%s{cdfqdn}",
    "tsip": "%s{tsip}",
    "elocation": "%s{elocation}",
    "ttype": "%s{ttype}",
    "aggregate": "%s{aggregate}",
    "srcip_country": "%s{srcip_country}",
    "threatcat": "%s{threatcat}",
    "ethreatname": "%s{ethreatname}",
    "threat_score": "%d{threat_score}",
    "threat_severity": "%s{threat_severity}",
    "ipsrulelabel": "%s{ipsrulelabel}",
    "ips_custom_signature": "%d{ips_custom_signature}",
    "sdport": "%d{sdport}",
    "sdip": "%s{sdip}",
    "ssip": "%s{ssip}",
    "ssport": "%d{ssport}",
    "ipcat": "%s{ipcat}",
    "avgduration": "%d{avgduration}",
    "durationms": "%d{durationms}",
    "numsessions": "%d{numsessions}",
    "stateful": "%s{stateful}",
    "erulelabel": "%s{erulelabel}",
    "action": "%s{action}",
    "dnat": "%s{dnat}",
    "dnatrulelabel": "%s{dnatrulelabel}",
    "recordid": "%d{recordid}",
    "pcapid": "%s{pcapid}",
    "inbytes": "%ld{inbytes}",
    "outbytes": "%ld{outbytes}",
    "nwapp": "%s{nwapp}",
    "ipproto": "%s{ipproto}",
    "destcountry": "%s{destcountry}",
    "nwsvc": "%s{nwsvc}",
    "eedone": "%s{eedone}",
    "elogin": "%s{elogin}",
    "edepartment": "%s{edepartment}",
    "edevicehostname": "%s{edevicehostname}",
    "devicemodel": "%s{devicemodel}",
    "devicename": "%s{devicename}",
    "deviceostype": "%s{deviceostype}",
    "deviceosversion": "%s{deviceosversion}",
    "deviceowner": "%s{deviceowner}",
    "deviceappversion": "%s{deviceappversion}",
    "external_deviceid": "%s{external_deviceid}",
    "ztunnelversion": "%s{ztunnelversion}",
    "bypassed_session": "%d{bypassed_session}",
    "bypass_etime": "%s{bypass_etime}",
    "flow_type": "%s{flow_type}",
    "datacenter": "%s{datacenter}",
    "datacentercity": "%s{datacentercity}",
    "datacentercountry": "%s{datacentercountry}",
    "rdr_rulename": "%s{rdr_rulename}",
    "fwd_gw_name": "%s{fwd_gw_name}",
    "zpa_app_seg_name": "%s{zpa_app_seg_name}"
  \}
\}
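
After trimming fields from a template, a quick local check that the required fields survived can save a troubleshooting cycle. The sketch below scans a hypothetical trimmed template for its placeholder field names and confirms that "datacenter" and "epochtime" are still present:

```python
import re

# Hypothetical trimmed-down firewall feed template; "datacenter" and
# "epochtime" must survive any trimming for Scrutinizer's use.
template = (r'\{"sourcetype":"zscalernss-fw","event":\{'
            r'"datacenter":"%s{datacenter}",'
            r'"epochtime":"%d{epochtime}",'
            r'"action":"%s{action}"\}\}')

# Collect field names whose values are %s/%d placeholders.
fields = set(re.findall(r'"(\w+)":\s*"%', template))
missing = {"datacenter", "epochtime"} - fields
print("missing required fields:", missing or "none")
```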

Troubleshooting#

If the Admin > Resources > Exporters view does not list exporters matching the log stream(s) set up for ingestion, check the following for issues:

  • Verify that the downloader was successfully configured by looking for the following message (or something similar) in /home/plixer/scrutinizer/files/logs/zscaler_log.json on that server/collector and confirm that the bind value matches the configured TCP port:

    {"level":"warn","pid":2059650,"bind":":10000","time":"2025-06-16T14:34:50.916-04:00","caller":"/builds/plixer-products/scrutinizer/application/golang/zscaler/sink-tcp-server.go:131","message":"listening for connections"}
    
  • Verify that the source log stream has been correctly configured.

  • Check the collector log file in /home/plixer/scrutinizer/files/logs/ for errors.

  • Check zscaler_log.json for other possible source-side issues.
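
Since each zscaler_log.json entry is a JSON object, the bind-port check can be scripted. This sketch parses the sample startup line above and compares the bound port against an assumed configured port of 10000:

```python
import json

# Sample startup line from zscaler_log.json, as shown above.
line = ('{"level":"warn","pid":2059650,"bind":":10000",'
        '"time":"2025-06-16T14:34:50.916-04:00",'
        '"caller":"/builds/plixer-products/scrutinizer/application/'
        'golang/zscaler/sink-tcp-server.go:131",'
        '"message":"listening for connections"}')

entry = json.loads(line)
configured_port = 10000  # the Log Downloader Port set in Scrutinizer (assumed)

# The bind value has the form ":<port>"; compare it to the configured port.
bound_port = int(entry["bind"].lstrip(":"))
print("port match:", bound_port == configured_port)
```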

For further assistance, contact Plixer Technical Support.

SSL certificate#

When the Scrutinizer Cloud NSS endpoint is accessed with a standard browser, a blank page indicates that there are no issues.

If the page indicates SSL errors, use the option to inspect the certificate to diagnose the issue(s).
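
As a rough local sanity check before pasting certificate or private key contents into the Scrutinizer fields, the sketch below verifies that a block of text at least looks like well-formed PEM. It does not validate the certificate itself, and the sample content is a dummy stand-in:

```python
import base64
import re

def looks_like_pem(text: str, label: str) -> bool:
    """Rough well-formedness check for a pasted PEM block (illustrative
    only; this does not cryptographically validate anything)."""
    m = re.search(
        rf"-----BEGIN {label}-----\n(.+?)\n-----END {label}-----",
        text, re.S,
    )
    if not m:
        return False
    try:
        base64.b64decode(m.group(1))  # the body must decode as base64
        return True
    except Exception:
        return False

# Dummy content standing in for a real certificate file's contents.
sample = "-----BEGIN CERTIFICATE-----\nYWJjZGVm\n-----END CERTIFICATE-----\n"
print(looks_like_pem(sample, "CERTIFICATE"))
```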

Overloaded collectors/downloaders#

The Unresourced - Enabled status in the Admin > Resources > Exporters view indicates that a log source has been temporarily disabled/paused due to insufficient resources.

The following are potential solutions for an overloaded collector:

  • If the collector is a VM, allocate additional resources (starting with CPU cores) to it.

  • If the collector is ingesting logs from only one log stream, distribute the logs across multiple streams, which can then be assigned to different collectors.

  • If the collector is ingesting logs from multiple log streams, distribute the streams across multiple collectors.

  • If the collector license has a flow rate limit, the license may need to be upgraded.

Note

  • Sources tagged as Disabled may have been automatically disabled (in last-in/first-out order) because the license exporter count limit was reached.

  • In distributed deployments, it is recommended to start with a 1:1 pairing of sources and collectors.