Zscaler ZIA logs#
When Zscaler Secure Internet and SaaS Access (ZIA) log ingestion is configured and enabled, Scrutinizer can run both preconfigured and custom reports against the collected ZIA log data.
Follow the steps below to set up a ZIA Cloud or VM-based Nanolog Streaming Service (NSS) feed for firewall logs and configure Scrutinizer to ingest the log flows.
Cloud NSS#
To enable log ingestion from a Cloud NSS feed, set up the feed in the ZIA Admin Portal before configuring Scrutinizer to download and consume the logs.
Adding/creating a Cloud NSS feed#
To create and configure the feed, log in to the ZIA Admin Portal and follow these steps:
Navigate to Administration > Nanolog Streaming Service.
Select the Cloud NSS Feeds tab, and then click Add Cloud NSS Feed.
In the next window, configure the following (other details can be set as needed/desired):
NSS Type: Select NSS for Firewall.
SIEM Type: Select Other.
OAuth 2.0 Authentication: Not currently supported for Other SIEMs and can be left disabled.
Max Batch Size: Adjust the value to match log throughput as closely as possible (can be set lower to improve latency at the cost of network overhead).
API URL: Enter https://SCRUTINIZER_DOWNLOADER_IP:8888/ (address and port may be different if translation or forwarding is enabled).
HTTP Headers: Create a PlixerAccessToken key and note the corresponding value for later use (recommended for authentication in place of OAuth).
Log Type: Select Firewall Logs or DNS Logs.
Firewall Log Type: Scrutinizer supports both types, but Full Session Logs are recommended for maximum visibility and reporting accuracy.
Feed Output Type: Select JSON.
JSON Array Notation: Leave disabled.
Feed Escape Character: Enter ,\" (required to avoid unintentional termination of string data values).
Feed Output Format: See the JSON output format section below for more details.
To stream both log types (firewall and DNS), create a feed for each type.
Configuring log ingestion in Scrutinizer#
After the feed has been created, configure Scrutinizer to receive the logs as follows:
In the Scrutinizer web interface, navigate to Admin > Integrations > Flow Log Ingestion.
Click the + icon, and then select Zscaler ZIA in the tray.
In the secondary tray, configure the following details:
Type: Select Cloud NSS.
Log Source: Select the log type that was set to be streamed.
Log Downloader: IP address of the Scrutinizer server/collector that will download the logs (must match the address entered for the API URL).
Log Downloader Port: Port to use to receive logs on the downloader (must match the port entered for the API URL).
Worker Count: Enter 1 (can be increased if log streaming rate seems slow; recommended maximum of 10).
Certificate Authority: SSL certificate issued to the public API URL (see the SSL certificate troubleshooting section below for help).
Private Key: SSL certificate’s private key (will be encrypted at rest).
Access Token: Value of the PlixerAccessToken created for HTTP header authentication.
Flow Collectors: Select the Scrutinizer collector(s) that will receive the logs as standard flows.
Click the Save button to add the log stream with the current settings.
Once added, the Cloud NSS feed will be listed in the main Admin > Integrations > Flow Log Ingestion view under the configured downloader IP address and port. If multiple feeds were created to stream different ZIA log types, each one will require a matching configuration in Scrutinizer.
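Before enabling the feed in the ZIA Admin Portal, the downloader endpoint and access token can be verified with a quick manual test. The following is a minimal sketch (not part of the product) that assumes the downloader accepts HTTPS POSTs of newline-delimited JSON events authenticated with the PlixerAccessToken header, as configured above; the URL, token value, and CA bundle path are placeholders.
Cloud NSS endpoint check (Python sketch)
# Minimal connectivity check for the Cloud NSS endpoint (illustrative only).
# Assumptions: the downloader listens on HTTPS port 8888 and authenticates
# requests via the PlixerAccessToken header configured in the ZIA feed.
import json

import requests  # third-party: pip install requests

DOWNLOADER_URL = "https://scrutinizer.example.com:8888/"  # placeholder API URL
ACCESS_TOKEN = "example-token-value"                      # placeholder header value
CA_BUNDLE = "/path/to/ca_bundle.pem"                      # CA that issued the endpoint certificate

# A single firewall event in the documented newline-delimited JSON format.
sample_event = json.dumps({
    "sourcetype": "zscalernss-fw",
    "event": {"csip": "10.0.0.1", "cdip": "203.0.113.10", "action": "Allow"},
})

response = requests.post(
    DOWNLOADER_URL,
    data=sample_event + "\n",
    headers={"PlixerAccessToken": ACCESS_TOKEN},
    verify=CA_BUNDLE,  # validate against the certificate configured for the endpoint
    timeout=10,
)
print(response.status_code)  # a 2xx response suggests the endpoint and token are accepted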
VM-based NSS#
When setting up log ingestion via VM-based NSS, deploy and then register the NSS VM/server in the ZIA Admin Portal before creating the feed and configuring Scrutinizer to download and consume the logs.
Adding/creating a VM-based NSS feed#
To create and configure the feed, log in to the ZIA Admin Portal and follow these steps:
Navigate to Administration > Nanolog Streaming Service.
Select the NSS Feeds tab, and then click Add NSS Feed.
In the next window, configure the following (other details can be set as needed/desired):
NSS Server: Select the NSS server to be used to stream logs to Scrutinizer.
SIEM Destination Type: Select the method the NSS server should use to address the Scrutinizer downloader/server.
SIEM IP Address: Enter the downloader’s IP address or FQDN (address may be different if translation or forwarding is enabled).
SIEM TCP Port: TCP port on the downloader to be used to receive the logs (may also be different if translation is enabled).
SIEM Rate: Leave on Unlimited unless Scrutinizer is overwhelmed.
Log Type: Select Firewall Logs or DNS Logs.
Firewall Log Type: Scrutinizer supports both types, but Full Session Logs are recommended for maximum visibility.
Feed Output Type: Select JSON.
Feed Escape Character: Not required.
Feed Output Format: See the below section on the JSON output format for more details.
Duplicate Logs: Select Disabled (recommended for reporting accuracy).
To stream both log types (firewall and DNS), create a feed for each type.
Configuring log ingestion in Scrutinizer#
After the feed has been created, configure Scrutinizer to receive the logs as follows:
In the Scrutinizer web interface, navigate to Admin > Integrations > Flow Log Ingestion.
Click the + icon, and then select Zscaler ZIA in the tray.
In the secondary tray, configure the following details:
Type: Select VM NSS.
Log Source: Select the log type that was set to be streamed.
Log Downloader: Select the IP address of the Scrutinizer server/collector that will download the logs (must match the SIEM IP Address entered for the feed).
Log Downloader Port: Port to use to receive logs on the downloader (must match the SIEM TCP Port entered for the feed).
Worker Count: Enter 1 (can be increased if log streaming rate seems slow; recommended maximum of 10).
Flow Collectors: Select the Scrutinizer collector(s) that will receive the logs as standard flows.
Click the Save button to add the log stream with the current settings.
Once added, the VM-based NSS feed will be listed in the main Admin > Integrations > Flow Log Ingestion view under the configured downloader IP address and port. If multiple feeds were created to stream different ZIA log types, each one will require a matching configuration in Scrutinizer.
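If logs do not arrive after the feed is enabled, a basic reachability check from the NSS VM's network segment can rule out connectivity problems. The sketch below assumes only that the SIEM IP Address and SIEM TCP Port configured in the feed should be reachable over TCP; the host and port values are placeholders.
VM-based NSS destination reachability check (Python sketch)
# Quick TCP reachability check for the VM-based NSS destination (illustrative only).
# The host and port below are placeholders for the SIEM IP Address and SIEM TCP Port
# configured in the feed; run this from the NSS VM's network segment.
import socket

DOWNLOADER_HOST = "scrutinizer.example.com"  # placeholder SIEM IP Address / FQDN
DOWNLOADER_PORT = 9999                       # placeholder SIEM TCP Port

try:
    with socket.create_connection((DOWNLOADER_HOST, DOWNLOADER_PORT), timeout=5):
        print("TCP connection succeeded; the downloader port is reachable.")
except OSError as exc:
    print(f"TCP connection failed: {exc}")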
JSON output format#
Event logs streamed to Scrutinizer are expected in the following format:
One JSON object per event (separated by newlines), comprising two fields:
"sourcetype": Must be either"zscalernss-fw"or"zscaler-dns"(other types are discarded)"event": Will contain details of the log event
JSON field names match names used by Zscaler.
Unnecessary fields can be omitted for network and storage efficiency.
Note
JSON object delimiters are escaped with backslashes (\) within Zscaler’s NSS feed dialogs.
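To illustrate the expected structure, the following sketch parses a small newline-delimited sample and keeps only the supported source types. It is an illustration of the format, not Scrutinizer's actual ingestion code, and the sample records are hypothetical.
Newline-delimited JSON parsing (Python sketch)
# Illustrative parser for the expected feed output: one JSON object per line,
# each with "sourcetype" and "event" fields.
import json

ACCEPTED_SOURCETYPES = {"zscalernss-fw", "zscalernss-dns"}

raw_feed = (
    '{"sourcetype": "zscalernss-dns", "event": {"req": "example.com", "reqtype": "A"}}\n'
    '{"sourcetype": "zscalernss-web", "event": {}}\n'  # unsupported type, discarded
)

for line in raw_feed.splitlines():
    record = json.loads(line)
    if record.get("sourcetype") not in ACCEPTED_SOURCETYPES:
        continue  # other source types are discarded
    print(record["sourcetype"], record["event"])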
DNS event fields#
The following JSON object example shows all supported fields for ZIA DNS log events. The object is broken into lines, and escape characters are omitted for readability.
ZIA DNS log event JSON example
{
"sourcetype": "zscalernss-dns",
"event": {
"cip": "%s{cip}",
"cloudname": "%s{cloudname}",
"company": "%s{company}",
"datacentercountry": "%s{datacentercountry}",
"datacenter": "%s{datacenter}",
"datacentercity": "%s{datacentercity}",
"deviceappversion": "%s{deviceappversion}",
"devicemodel": "%s{devicemodel}",
"devicename": "%s{devicename}",
"deviceostype": "%s{deviceostype}",
"deviceosversion": "%s{deviceosversion}",
"deviceowner": "%s{deviceowner}",
"devicetype": "%s{devicetype}",
"dnsapp": "%s{dnsapp}",
"dnsappcat": "%s{dnsappcat}",
"dnsgw_flags": "%s{dnsgw_flags}",
"dnsgw_slot": "%s{dnsgw_slot}",
"dnsgw_srv_proto": "%s{dnsgw_srv_proto}",
"domcat": "%s{domcat}",
"durationms": "%d{durationms}",
"ecs_prefix": "%s{ecs_prefix}",
"ecs_slot": "%s{ecs_slot}",
"edepartment": "%s{edepartment}",
"edevicehostname": "%s{edevicehostname}",
"eedone": "%s{eedone}",
"elocation": "%s{elocation}",
"elogin": "%s{elogin}",
"epochtime": "%d{epochtime}",
"error": "%s{error}",
"http_code": "%s{http_code}",
"istcp": "%d{istcp}",
"pcapid": "%s{pcapid}",
"protocol": "%s{protocol}",
"recordid": "%d{recordid}",
"req": "%s{req}",
"reqaction": "%s{reqaction}",
"reqrulelabel": "%s{reqrulelabel}",
"reqtype": "%s{reqtype}",
"res": "%s{res}",
"resaction": "%s{resaction}",
"respipcat": "%s{respipcat}",
"resrulelabel": "%s{resrulelabel}",
"restype": "%s{restype}",
"sip": "%s{sip}",
"sport": "%d{sport}"
}
}
Firewall event fields#
The following JSON object example shows all supported fields for ZIA firewall log events. The object is broken into lines, and escape characters are omitted for readability.
ZIA firewall log event JSON example
{
"sourcetype": "zscalernss-fw",
"event": {
"time": "%s{time}",
"epochtime": "%d{epochtime}",
"csip": "%s{csip}",
"csport": "%d{csport}",
"cdip": "%s{cdip}",
"cdport": "%d{cdport}",
"cdfqdn": "%s{cdfqdn}",
"tsip": "%s{tsip}",
"elocation": "%s{elocation}",
"ttype": "%s{ttype}",
"aggregate": "%s{aggregate}",
"srcip_country": "%s{srcip_country}",
"threatcat": "%s{threatcat}",
"ethreatname": "%s{ethreatname}",
"threat_score": "%d{threat_score}",
"threat_severity": "%s{threat_severity}",
"ipsrulelabel": "%s{ipsrulelabel}",
"ips_custom_signature": "%d{ips_custom_signature}",
"sdport": "%d{sdport}",
"sdip": "%s{sdip}",
"ssip": "%s{ssip}",
"ssport": "%d{ssport}",
"ipcat": "%s{ipcat}",
"avgduration": "%d{avgduration}",
"durationms": "%d{durationms}",
"numsessions": "%d{numsessions}",
"stateful": "%s{stateful}",
"erulelabel": "%s{erulelabel}",
"action": "%s{action}",
"dnat": "%s{dnat}",
"dnatrulelabel": "%s{dnatrulelabel}",
"recordid": "%d{recordid}",
"pcapid": "%s{pcapid}",
"inbytes": "%ld{inbytes}",
"outbytes": "%ld{outbytes}",
"nwapp": "%s{nwapp}",
"ipproto": "%s{ipproto}",
"destcountry": "%s{destcountry}",
"nwsvc": "%s{nwsvc}",
"eedone": "%s{eedone}",
"elogin": "%s{elogin}",
"edepartment": "%s{edepartment}",
"edevicehostname": "%s{edevicehostname}",
"devicemodel": "%s{devicemodel}",
"devicename": "%s{devicename}",
"deviceostype": "%s{deviceostype}",
"deviceosversion": "%s{deviceosversion}",
"deviceowner": "%s{deviceowner}",
"deviceappversion": "%s{deviceappversion}",
"external_deviceid": "%s{external_deviceid}",
"ztunnelversion": "%s{ztunnelversion}",
"bypassed_session": "%d{bypassed_session}",
"bypass_etime": "%s{bypass_etime}",
"flow_type": "%s{flow_type}",
"datacenter": "%s{datacenter}",
"datacentercity": "%s{datacentercity}",
"datacentercountry": "%s{datacentercountry}",
"rdr_rulename": "%s{rdr_rulename}",
"fwd_gw_name": "%s{fwd_gw_name}",
"zpa_app_seg_name": "%s{zpa_app_seg_name}"
}
}
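As a rough illustration of how these fields line up with standard flow attributes, the sketch below maps the client tuple, protocol, and byte counters from a firewall event onto a generic flow record. The mapping is a simplified example, not Scrutinizer's actual conversion logic.
Firewall event to flow record mapping (Python sketch)
# Rough illustration of how key firewall event fields correspond to flow
# attributes; field names come from the example above, but this mapping is a
# simplified sketch, not Scrutinizer's actual conversion logic.
def event_to_flow(event: dict) -> dict:
    return {
        "src_ip": event.get("csip"),         # client source IP
        "src_port": event.get("csport"),     # client source port
        "dst_ip": event.get("cdip"),         # client destination IP
        "dst_port": event.get("cdport"),     # client destination port
        "protocol": event.get("ipproto"),    # IP protocol
        "bytes_in": event.get("inbytes"),    # bytes received
        "bytes_out": event.get("outbytes"),  # bytes sent
        "action": event.get("action"),       # firewall action taken
    }

print(event_to_flow({"csip": "10.0.0.5", "csport": 52144, "cdip": "198.51.100.7",
                     "cdport": 443, "ipproto": "TCP", "inbytes": 1200,
                     "outbytes": 3400, "action": "Allow"}))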
SSL certificate troubleshooting#
When the Scrutinizer Cloud NSS endpoint is accessed with a standard browser, a blank page indicates that there are no issues.
If the browser reports SSL errors, use its certificate inspection option to diagnose the issue(s).
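The certificate can also be inspected from the command line. The sketch below uses Python's standard ssl module to download the certificate presented by the endpoint and then attempts a validated handshake to surface the specific error; the hostname and port are placeholders for the configured API URL.
Endpoint certificate inspection (Python sketch)
# Fetch and display the certificate presented by the Cloud NSS endpoint
# (stdlib-only sketch; host and port below are placeholders).
import socket
import ssl

HOST = "scrutinizer.example.com"  # placeholder API URL hostname
PORT = 8888

# Retrieve the PEM-encoded certificate without validating it, so it can be
# inspected even when validation is the part that is failing.
pem = ssl.get_server_certificate((HOST, PORT))
print(pem)

# Attempt a validated handshake to surface the specific SSL error, if any.
context = ssl.create_default_context()  # optionally: context.load_verify_locations("ca.pem")
try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Handshake OK:", tls.getpeercert()["subject"])
except ssl.SSLError as exc:
    print("SSL error:", exc)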
Troubleshooting#
Missed flow sequence numbers (MFSNs) and a buildup of log files in flow log source containers indicate that the rate of flow and/or log generation exceeds the capacity of the collector assigned to the flow log source.
The following are potential solutions for an overloaded collector:
If the collector is a VM, allocate additional resources (starting with CPU cores) to it.
If the collector is ingesting flow logs from only one source (bucket or container), distribute the logs across multiple sources, which can then be assigned to different collectors.
If the collector is ingesting flow logs from multiple sources, reassign sources across multiple collectors.
If the collector license has a flow rate limit, the license may need to be upgraded.
Note
In distributed deployments, it is recommended to start with a 1:1 pairing of sources and collectors.
The Unresourced - Enabled status in the Admin > Resources > Exporters view is another indication that flow log sources are being temporarily disabled/paused due to insufficient resources.
If the Admin > Resources > Exporters view does not list exporters that are associated with the virtual network(s) set up for flow ingestion, do the following:
Navigate to Admin > Integrations > Flow Log Ingestion, open the configuration tray for the collector the source was assigned to, and then use the Test button to verify that the correct details were entered.
Note
The Test button only checks if the communication with the data source works.
Verify that flow logs are correctly being sent to the bucket or container.
Check the collector log file in /home/plixer/scrutinizer/files/logs/ for errors.
Check awss3_log.json (AWS), azure_log.json (Azure), or ocist_log.json for possible source-side issues.
Note
The Admin > Resources > Exporters view also displays exporters that have been disabled. Because each AWS, Azure, or OCI flow log source counts as an exporter, one or more sources may be disabled automatically (in last-in/first-out order) if the exporter count limit of the current license is reached.