Environment sizing#

To ensure consistent performance and continuous availability, Scrutinizer must be provisioned according to the functions and features its users require.

This section outlines the recommended procedures for calculating the appropriate resource allotments for Scrutinizer deployments.

Note

Certain steps in these guides require access to the Scrutinizer web interface. For more accurate results, complete the initial setup wizard beforehand.

On this page:

CPU/RAM
Storage
Distributed clusters
Plixer ML Engine

CPU/RAM#

Follow the steps described in this section to calculate the total number of CPU cores and amount of RAM that should be allocated to a Scrutinizer deployment.

Note

For additional guidelines related to distributed clusters, see this section.

Basic sizing#

  1. Use the recommendations in the table below as the starting CPU core and RAM allocations. These allocations cover Scrutinizer’s core functions (flow collection, reporting, basic alarm policies) for the expected flow rates and exporter counts indicated.

    CPU cores and RAM based on flow rate and exporter count (columns = number of exporters; cells = CPU cores / GB RAM)

    | Flows/s | 5 | 25 | 50 | 100 | 200 | 300 | 400 | 500 |
    |---------|---|----|----|-----|-----|-----|-----|-----|
    | 5k | 8 / 16 | 8 / 16 | 10 / 20 | 14 / 28 | 20 / 39 | 26 / 52 | 32 / 67 | 38 / 82 |
    | 10k | 8 / 16 | 8 / 16 | 12 / 24 | 18 / 36 | 25 / 50 | 32 / 65 | 38 / 81 | 43 / 97 |
    | 20k | 16 / 32 | 16 / 32 | 16 / 32 | 24 / 48 | 32 / 64 | 38 / 80 | 43 / 96 | 48 / 112 |
    | 50k | 32 / 64 | 32 / 64 | 32 / 64 | 32 / 64 | 39 / 80 | 44 / 96 | 48 / 112 | 52 / 128 |
    | 75k | 46 / 96 | 46 / 96 | 46 / 96 | 46 / 96 | 46 / 96 | 49 / 112 | 52 / 128 | 55 / 144 |
    | 100k | 52 / 128 | 52 / 128 | 52 / 128 | 52 / 128 | 52 / 128 | 52 / 128 | 55 / 144 | 58 / 160 |
    | 125k | 58 / 160 | 58 / 160 | 58 / 160 | 58 / 160 | 58 / 160 | 58 / 160 | 58 / 160 | 61 / 176 |
    | 150k | 64 / 192 | 64 / 192 | 64 / 192 | 64 / 192 | 64 / 192 | 64 / 192 | 64 / 192 | 64 / 192 |
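The table lookup can be automated with a small helper that rounds the expected flow rate and exporter count up to the next table row and column. This is a minimal sketch (the function and variable names are illustrative, not part of Scrutinizer); the table values are taken directly from the Basic sizing table above:

```python
import bisect

# Baseline CPU/RAM table from the Basic sizing section:
# rows = flows/s, columns = exporter count, cells = (CPU cores, RAM GB).
FLOW_ROWS = [5_000, 10_000, 20_000, 50_000, 75_000, 100_000, 125_000, 150_000]
EXPORTER_COLS = [5, 25, 50, 100, 200, 300, 400, 500]
SIZING = {
    5_000:   [(8, 16), (8, 16), (10, 20), (14, 28), (20, 39), (26, 52), (32, 67), (38, 82)],
    10_000:  [(8, 16), (8, 16), (12, 24), (18, 36), (25, 50), (32, 65), (38, 81), (43, 97)],
    20_000:  [(16, 32), (16, 32), (16, 32), (24, 48), (32, 64), (38, 80), (43, 96), (48, 112)],
    50_000:  [(32, 64)] * 4 + [(39, 80), (44, 96), (48, 112), (52, 128)],
    75_000:  [(46, 96)] * 5 + [(49, 112), (52, 128), (55, 144)],
    100_000: [(52, 128)] * 6 + [(55, 144), (58, 160)],
    125_000: [(58, 160)] * 7 + [(61, 176)],
    150_000: [(64, 192)] * 8,
}

def base_sizing(flows_per_sec: int, exporters: int) -> tuple[int, int]:
    """Return (CPU cores, RAM GB) by rounding up to the next table row/column."""
    if flows_per_sec > FLOW_ROWS[-1] or exporters > EXPORTER_COLS[-1]:
        raise ValueError("beyond single-appliance limits; use a distributed cluster")
    row = FLOW_ROWS[bisect.bisect_left(FLOW_ROWS, flows_per_sec)]
    col = bisect.bisect_left(EXPORTER_COLS, exporters)
    return SIZING[row][col]

print(base_sizing(15_000, 120))  # rounds up to the 20k-flows / 200-exporter cell: (32, 64)
```

Rounding up rather than interpolating keeps the result conservative, matching how the table is intended to be read.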

Feature sizing#

  1. Following the table below, compute the total expected CPU and RAM usage for all feature sets that will be enabled.

    Feature resource requirements and FA algorithms

    | Feature | CPU (cores) | RAM (GB) | FA Algorithms |
    |---------|-------------|----------|---------------|
    | Streaming (to a Plixer ML Engine or external data lake) | 1 | 0.4 | N/A |
    | Basic Tuple Analysis | 5.85 | 3.3 | DNS Hits, FIN Scan, Host Reputation, ICMP Destination Unreachable, ICMP Port Unreachable, Large Ping, Odd TCP Flags Scan, P2P Detection, Packet Flood, Ping Flood, Ping Scan, Reverse SSH Shell, RST/ACK Detection, SYN Scan, TCP Scan, Network Transports, UDP Scan, XMAS Scan |
    | Application Analysis | 0.25 | 0.1 | Protocol Misdirection |
    | Worm Analysis | 0.5 | 0.2 | Lateral Movement Attempt, Lateral Movement |
    | FlowPro DNS Exfiltration Analysis | 0.5 | 0.2 | DNS Command and Control Detection, DNS Data Leak Detection |
    | FlowPro DNS Basic Analysis | 0.25 | 0.1 | BotNet Detection |
    | JA3 Analysis | 0.25 | 0.1 | JA3 Fingerprinting |
    | FlowPro DNS Server Analysis | 0.25 | 0.1 | DNS Server Detection |
    | FlowPro Domain Reputation Analysis | 0.25 | 0.1 | Domain Reputation |
    | Firewall Event Analysis | 0.25 | 0.1 | Denied Flows Firewall |
    | Scan Analysis | 1.0 | 0.4 | Bogon Traffic, Breach Attempt Detection, NULL Scan, Source Equals Destination |
    | Jitter Analysis | 0.25 | 0.1 | Medianet Jitter Violations |
    | DNS Lookup Analysis | 0.25 | 0.1 | NetFlow Domain Reputation |
    | DoS Analysis | 0.5 | 0.2 | DDoS Detection, DRDoS Detection |
    | Host Index Analysis | 2.4 | 2.4 | Host Watchlist, Incident Correlation, IP Address Violations |

    Note

    • Each FA algorithm reports detections using one or more alarm policies, which are also enabled/disabled as part of the feature set. Policy-to-algorithm associations can be viewed in the Admin > Alarm Monitor > Alarm Policies view.

    • The CPU and RAM allocations per feature are recommended for deployments with up to 500 exporters and a total flow rate of 150,000 flows/s.

  2. Add the totals obtained from the Basic sizing and Feature sizing calculations above, and apply any necessary adjustments to the CPU and RAM allocations for the Scrutinizer appliance.

  3. In the web interface, navigate to Admin > Resources > System Performance and verify that the correct CPU core count and RAM amount are displayed for the collector.

  4. After confirming that CPU and RAM allocations have been correctly applied, go to Admin > Resources > System Performance and enable/disable features according to the feature sets selected in step 1.
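The arithmetic in the steps above can be sketched as follows. This is an illustrative helper, not part of Scrutinizer, and only a few rows of the feature table are reproduced here; the remaining feature sets follow the same pattern:

```python
import math

# Per-feature costs from the Feature sizing table: (CPU cores, GB RAM).
# Subset shown for illustration.
FEATURE_COSTS = {
    "Streaming": (1.0, 0.4),
    "Basic Tuple Analysis": (5.85, 3.3),
    "DoS Analysis": (0.5, 0.2),
    "Host Index Analysis": (2.4, 2.4),
}

def total_allocation(base_cores, base_ram_gb, enabled_features):
    """Add feature overhead to the baseline from the Basic sizing table."""
    cores = base_cores + sum(FEATURE_COSTS[f][0] for f in enabled_features)
    ram = base_ram_gb + sum(FEATURE_COSTS[f][1] for f in enabled_features)
    # Round up to whole cores/GB when provisioning the appliance.
    return math.ceil(cores), math.ceil(ram)

# Example: 20k flows/s with 200 exporters (32 cores / 64 GB baseline)
# plus three feature sets:
print(total_allocation(32, 64, ["Basic Tuple Analysis", "DoS Analysis",
                                "Host Index Analysis"]))  # → (41, 70)
```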

Once Scrutinizer is fully configured and running, CPU and RAM utilization can be monitored from the Admin > Resources > System Performance page using the CPU Utilization and Available Memory graphs. Review these graphs regularly (not only after resources are initially allocated) so that any necessary adjustments can be made.

Important

After making any adjustments to Scrutinizer’s resource allocations, launch scrut_util as the root user and run the set tuning command to re-tune the appliance.

Alarm policies under the System category are also used to report events related to resource utilization (e.g., collection paused/resumed, feature set paused/resumed).

Additional factors#

In addition to the considerations mentioned above, there are other factors that can impact performance in Scrutinizer, such as the number/complexity of notification profiles in use, the number of report thresholds configured, and the number of scheduled email reports that have been set up. It is recommended to regularly review the Admin > Resources > System Performance page to ensure that resource utilization remains within acceptable values.

Storage#

The Admin > Resources > System Performance page of the web interface summarizes disk utilization for individual collectors in a Scrutinizer environment. A more detailed view that shows actual and expected storage use for historical flow data can also be accessed by drilling into a specific collector.

This section discusses the main factors that influence a Scrutinizer collector’s disk use and provides instructions for anticipating additional storage needs.

Data retention#

Scrutinizer’s data history settings can be used to adjust how long Scrutinizer stores aggregated flow data, alarm/event details, and other data. With the default settings, a collector provisioned with the minimum 100 GB of storage can store up to 30 days of NetFlow V5 data for a maximum of 25 flow-exporting devices with a combined flow rate of 1,500 flows/s.

For more accurate and detailed projections of disk space requirements based on specific data retention settings, the following database size calculator can be accessed from the data history settings tray:

Database size calculator

The calculator shows both current and predicted disk usage for each historical flow data interval based on the retention times entered. Details are shown by collector, with total predicted usage and total storage currently available also included.
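For a quick back-of-envelope figure before opening the calculator, the baseline stated above (roughly 100 GB for 30 days of NetFlow v5 at a combined 1,500 flows/s) can be extrapolated linearly. This sketch assumes storage scales linearly with retention time and flow rate, which real deployments (larger templates, higher cardinality) will not follow exactly; treat the built-in database size calculator as authoritative:

```python
# Documented baseline: ~100 GB holds ~30 days of NetFlow v5
# at a combined 1,500 flows/s.
BASELINE_GB, BASELINE_DAYS, BASELINE_FPS = 100, 30, 1500

def estimate_storage_gb(days: int, flows_per_sec: int) -> float:
    """Rough linear extrapolation of flow-history disk usage."""
    return BASELINE_GB * (days / BASELINE_DAYS) * (flows_per_sec / BASELINE_FPS)

# 90 days of retention at 10,000 flows/s:
print(round(estimate_storage_gb(90, 10_000)))  # → 2000 (GB)
```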

Note

  • More detailed storage utilization information can be accessed by drilling into a collector from the Admin > Resources > System Performance page.

  • Scrutinizer’s functions are highly I/O intensive, and there are many factors that can impact the system’s disk-based performance, such as the size/complexity of flows being received and flow cardinality. To ensure optimal performance, 15k HDDs or SSDs in a RAID 10 are recommended.

Auto-trimming#

Scrutinizer automatically trims older historical flow data when available disk space falls below the Minimum Percent Free Disk Space Before Trimming value configured in the data history settings.

Auto-trimming can be disabled by unticking the Auto History Trimming checkbox, but with trimming disabled, flow collection and other functions may be paused when available storage runs low. Alternatively, the collector's storage can be increased to retain older records.

Host indexing#

When host indexing is enabled, it may become necessary to allocate additional storage, CPU cores, and RAM to Scrutinizer collectors.

Host to host indexing can have a significant impact on disk utilization due to the two types of records stored:

  • Continuously active pairs, whose records never expire

  • Ephemeral unique pairs, whose records expire but are replaced at approximately the same rate

Disk space calculations#

To approximate the amount of additional disk space that will be used by the host to host index:

  1. Create/run a new Host to Host pair report and add all exporters that were defined as inclusions for the Host Indexing FA algorithm.

  2. Set the time window to cover a period of at least 24 hours.

  3. When the output of the report is displayed, click the gear button to open the Options tray and select Global.

  4. In the secondary tray, select the 5m option from the Data Source dropdown and click Apply before returning to the main view.

  5. Note the total result count, which will be roughly equivalent to the number of active pairs.

  6. Return to the Options > Global tray and switch to the 1m data source option.

  7. Subtract the previous result count from the updated total result count to determine the number of ephemeral pairs.

After obtaining the active pair and ephemeral pair counts, the following formula can be used to calculate additional disk space requirements for host to host indexing:

(Active pair count + Ephemeral pair count) * Exporter count * 200 B

where Exporter count corresponds to the total number of exporters/inclusions defined for the Host Indexing algorithm.
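The formula above can be expressed directly in code; the function name and the example pair counts are illustrative only:

```python
def host_index_disk_bytes(active_pairs: int, ephemeral_pairs: int,
                          exporters: int, bytes_per_record: int = 200) -> int:
    """Approximate extra disk used by the host-to-host index.

    Implements: (Active pair count + Ephemeral pair count)
                * Exporter count * 200 B,
    where the pair counts come from comparing the 5m and 1m
    data sources of the Host to Host pair report, as described above.
    """
    return (active_pairs + ephemeral_pairs) * exporters * bytes_per_record

# Hypothetical example: 40,000 active + 15,000 ephemeral pairs
# across 20 exporters/inclusions:
print(host_index_disk_bytes(40_000, 15_000, 20) / 1e9)  # → 0.22 (GB)
```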

Utilization alerts#

If the combined disk space used by the host and host pair databases reaches 100% of the Host Index Max Disk Space setting of the Host Indexing algorithm, host and host to host indexing will be suspended until storage becomes available again.

The following alarm policies are used to alert users to high disk utilization by host indexing:

| Policy | Description |
|--------|-------------|
| Host Index Disk Space Warning | Triggered when the disk space used by host indexing functions reaches/exceeds 75% of the specified Host Index Max Disk Space |
| Host Index Disk Space Error | Triggered when host indexing functions are suspended because the Host Index Max Disk Space has been reached |
| Host Index Disk Availability Error | Triggered when host indexing functions are suspended because disk utilization for the volume the host and host pair databases are stored on has reached/exceeded 90% |

Host indexing functions will automatically restart once sufficient storage is available, either due to record expiry or because disk space has been added.

Distributed clusters#

Distributed configurations consisting of one primary reporting server and multiple remote collectors allow Scrutinizer to scale beyond the single-appliance ceiling of 500 exporters with a total flow rate of 150,000 flows/s.

This section contains resource allocation guidelines and recommendations for individual appliances in a distributed cluster.

Remote collectors#

In a distributed environment, resource allocation for each remote collector should follow the same guidelines/recommendations as that of a single Scrutinizer appliance:

  1. Use the expected flow rate and exporter count for the collector to determine recommended CPU and RAM allocations for core functions.

  2. Calculate the total additional CPU cores and RAM required to support the features that will be enabled for the collector and exporters associated with it.

  3. Provision the collector with the minimum 100 GB of disk space and the total CPU and RAM obtained from the first two steps.

After the collector has been registered as part of the cluster and is receiving flows, continue to monitor resource utilization via the Admin > Resources > System Performance page and make adjustments when necessary.

Primary reporter#

CPU and RAM requirements for the primary reporter in a distributed environment are primarily based on the number of remote collectors in the cluster:

| Resource | Minimum | Recommended |
|----------|---------|-------------|
| CPU cores | 2x the number of remote collectors | 4x the number of remote collectors |
| RAM | 2 GB for every remote collector | 4 GB for every remote collector |
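These multipliers can be captured in a small helper. Note that, per the guidance above, the results exclude the base resource requirements of the virtual appliance itself (the function name is illustrative):

```python
def primary_reporter_sizing(remote_collectors: int) -> dict:
    """Minimum and recommended CPU/RAM for the primary reporter,
    excluding the appliance's base resource requirements."""
    return {
        "min_cores": 2 * remote_collectors,
        "rec_cores": 4 * remote_collectors,
        "min_ram_gb": 2 * remote_collectors,
        "rec_ram_gb": 4 * remote_collectors,
    }

# A cluster with 6 remote collectors:
print(primary_reporter_sizing(6))
# → {'min_cores': 12, 'rec_cores': 24, 'min_ram_gb': 12, 'rec_ram_gb': 24}
```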

Note

  • The CPU core and RAM allocations above are exclusive of the base resource requirements for the virtual appliance.

  • Depending on the scale of the network, the primary reporter may be subject to additional load due to the volume of alarms/events being forwarded by the collectors.

ML Engine#

Deployed as part of Plixer One Enterprise, the Plixer ML Engine is a supplementary appliance that provides advanced anomaly and threat detection through Scrutinizer.

The following subsections contain sizing guidelines for local and cloud-based ML Engine deployments:

Hint

Sizing recommendations for the ML Engine are based on flow rates and asset counts. An “asset” is either an exporter interface or a host.

Local deployments#

The following table shows the recommended resource allocations for a local Plixer ML Engine install:

ML Engine local deployment sizing table (columns = number of assets; cells = CPU cores / GB RAM / TB disk)

| Flows/s | 150 | 300 | 450 | 600 | 750 | 900 | 1050 | 1200 | 1450 | 1700 |
|---------|-----|-----|-----|-----|-----|-----|------|------|------|------|
| 10k | 8 / 40 / 0.2 | 12 / 80 / 0.4 | 16 / 112 / 0.6 | 20 / 136 / 0.8 | 24 / 160 / 1.0 | 28 / 184 / 1.2 | 32 / 208 / 1.4 | 36 / 232 / 1.6 | 40 / 256 / 1.8 | 44 / 256 / 2.0 |
| 20k | 12 / 80 / 0.4 | 14 / 112 / 0.6 | 18 / 136 / 0.8 | 22 / 160 / 1.0 | 26 / 184 / 1.2 | 30 / 208 / 1.4 | 34 / 232 / 1.6 | 38 / 244 / 1.8 | 42 / 256 / 2.0 | 46 / 288 / 2.2 |
| 30k | 16 / 112 / 0.6 | 18 / 136 / 0.8 | 20 / 160 / 1.0 | 24 / 184 / 1.2 | 28 / 208 / 1.4 | 32 / 232 / 1.6 | 36 / 244 / 1.8 | 40 / 256 / 2.0 | 44 / 288 / 2.2 | 48 / 320 / 2.4 |
| 40k | 20 / 136 / 0.8 | 22 / 160 / 1.0 | 24 / 184 / 1.2 | 26 / 208 / 1.4 | 30 / 232 / 1.6 | 34 / 244 / 1.8 | 38 / 256 / 2.0 | 42 / 288 / 2.2 | 46 / 320 / 2.4 | 50 / 352 / 2.6 |
| 50k | 24 / 160 / 1.0 | 26 / 184 / 1.2 | 28 / 208 / 1.4 | 30 / 232 / 1.6 | 32 / 244 / 1.8 | 36 / 256 / 2.0 | 40 / 288 / 2.2 | 44 / 320 / 2.4 | 48 / 352 / 2.6 | 52 / 384 / 2.8 |
| 60k | 28 / 184 / 1.2 | 30 / 208 / 1.4 | 32 / 232 / 1.6 | 34 / 244 / 1.8 | 36 / 256 / 2.0 | 38 / 288 / 2.2 | 42 / 320 / 2.4 | 46 / 352 / 2.6 | 50 / 384 / 2.8 | 54 / 416 / 3.0 |
| 70k | 32 / 208 / 1.4 | 34 / 232 / 1.6 | 36 / 244 / 1.8 | 38 / 256 / 2.0 | 40 / 288 / 2.2 | 42 / 352 / 2.4 | 46 / 352 / 2.6 | 50 / 384 / 2.8 | 54 / 448 / 3.0 | 56 / 448 / 3.2 |
| 80k | 36 / 232 / 1.6 | 38 / 256 / 1.8 | 40 / 288 / 2.0 | 42 / 320 / 2.2 | 44 / 352 / 2.4 | 46 / 384 / 2.6 | 50 / 416 / 2.8 | 54 / 448 / 3.0 | 56 / 480 / 3.2 | 56 / 480 / 3.4 |
| 90k | 40 / 256 / 1.8 | 42 / 288 / 2.0 | 44 / 320 / 2.2 | 46 / 352 / 2.4 | 48 / 384 / 2.6 | 52 / 416 / 2.8 | 54 / 448 / 3.0 | 56 / 480 / 3.2 | 56 / 512 / 3.4 | 56 / 512 / 3.6 |
| 100k | 44 / 256 / 2.0 | 46 / 288 / 2.2 | 48 / 320 / 2.4 | 50 / 352 / 2.6 | 52 / 384 / 2.8 | 54 / 416 / 3.0 | 56 / 448 / 3.2 | 56 / 480 / 3.4 | 56 / 512 / 3.6 | 56 / 512 / 3.6 |

AWS deployments#

When deploying the Plixer ML Engine as an AWS AMI, use the following table to determine the appropriate instance type and amount of storage:

Instance type and Elastic Block Store (EBS) size based on flow rate and asset count (columns = number of assets)

| Flows/s | 150 | 300 | 450 | 600 | 750 | 900 | 1050 | 1200 | 1450 | 1700 |
|---------|-----|-----|-----|-----|-----|-----|------|------|------|------|
| 10k | r5a.2xlarge / 0.2 TB | r5a.4xlarge / 0.4 TB | r5a.4xlarge / 0.6 TB | r5a.8xlarge / 0.8 TB | r5a.8xlarge / 1.0 TB | r5a.8xlarge / 1.2 TB | r5a.8xlarge / 1.4 TB | r5a.12xlarge / 1.6 TB | r5a.12xlarge / 1.8 TB | r5a.12xlarge / 2.0 TB |
| 20k | r5a.4xlarge / 0.4 TB | r5a.4xlarge / 0.6 TB | r5a.8xlarge / 0.8 TB | r5a.8xlarge / 1.0 TB | r5a.8xlarge / 1.2 TB | r5a.8xlarge / 1.4 TB | r5a.12xlarge / 1.6 TB | r5a.12xlarge / 1.8 TB | r5a.12xlarge / 2.0 TB | r5a.12xlarge / 2.2 TB |
| 30k | r5a.4xlarge / 0.6 TB | r5a.8xlarge / 0.8 TB | r5a.8xlarge / 1.0 TB | r5a.8xlarge / 1.2 TB | r5a.8xlarge / 1.4 TB | r5a.8xlarge / 1.6 TB | r5a.12xlarge / 1.8 TB | r5a.12xlarge / 2.0 TB | r5a.12xlarge / 2.2 TB | r5a.12xlarge / 2.4 TB |
| 40k | r5a.8xlarge / 0.8 TB | r5a.8xlarge / 1.0 TB | r5a.8xlarge / 1.2 TB | r5a.8xlarge / 1.4 TB | r5a.8xlarge / 1.6 TB | r5a.12xlarge / 1.8 TB | r5a.12xlarge / 2.0 TB | r5a.12xlarge / 2.2 TB | r5a.12xlarge / 2.4 TB | r5a.16xlarge / 2.6 TB |
| 50k | r5a.8xlarge / 1.0 TB | r5a.8xlarge / 1.2 TB | r5a.8xlarge / 1.4 TB | r5a.8xlarge / 1.6 TB | r5a.12xlarge / 1.8 TB | r5a.12xlarge / 2.0 TB | r5a.12xlarge / 2.2 TB | r5a.12xlarge / 2.4 TB | r5a.12xlarge / 2.6 TB | r5a.16xlarge / 2.8 TB |
| 60k | r5a.8xlarge / 1.2 TB | r5a.8xlarge / 1.4 TB | r5a.8xlarge / 1.6 TB | r5a.12xlarge / 1.8 TB | r5a.12xlarge / 2.0 TB | r5a.12xlarge / 2.2 TB | r5a.12xlarge / 2.4 TB | r5a.12xlarge / 2.6 TB | r5a.16xlarge / 2.8 TB | r5a.16xlarge / 3.0 TB |
| 70k | r5a.8xlarge / 1.4 TB | r5a.12xlarge / 1.6 TB | r5a.12xlarge / 1.8 TB | r5a.12xlarge / 2.0 TB | r5a.12xlarge / 2.2 TB | r5a.12xlarge / 2.4 TB | r5a.12xlarge / 2.6 TB | r5a.16xlarge / 2.8 TB | r5a.16xlarge / 3.0 TB | r5a.16xlarge / 3.2 TB |
| 80k | r5a.12xlarge / 1.6 TB | r5a.12xlarge / 1.8 TB | r5a.12xlarge / 2.0 TB | r5a.12xlarge / 2.2 TB | r5a.12xlarge / 2.4 TB | r5a.12xlarge / 2.6 TB | r5a.16xlarge / 2.8 TB | r5a.16xlarge / 3.0 TB | r5a.16xlarge / 3.2 TB | r5a.16xlarge / 3.4 TB |
| 90k | r5a.12xlarge / 1.8 TB | r5a.12xlarge / 2.0 TB | r5a.12xlarge / 2.2 TB | r5a.12xlarge / 2.4 TB | r5a.12xlarge / 2.6 TB | r5a.16xlarge / 2.8 TB | r5a.16xlarge / 3.0 TB | r5a.16xlarge / 3.2 TB | r5a.16xlarge / 3.4 TB | r5a.16xlarge / 3.6 TB |
| 100k | r5a.12xlarge / 2.0 TB | r5a.12xlarge / 2.2 TB | r5a.12xlarge / 2.4 TB | r5a.16xlarge / 2.6 TB | r5a.16xlarge / 2.8 TB | r5a.16xlarge / 3.0 TB | r5a.16xlarge / 3.2 TB | r5a.16xlarge / 3.4 TB | r5a.16xlarge / 3.6 TB | r5a.16xlarge / 3.6 TB |

Azure deployments#

When deploying the Plixer ML Engine as an Azure VM image, use the following table to determine the appropriate VM size and amount of storage:

VM and Azure Disk Storage sizes based on flow rate and asset count (columns = number of assets)

| Flows/s | 150 | 300 | 450 | 600 | 750 | 900 | 1050 | 1200 | 1450 | 1700 |
|---------|-----|-----|-----|-----|-----|-----|------|------|------|------|
| 10k | Standard_D13_v2 / 0.2 TB | Standard_D14_v2 / 0.4 TB | Standard_D14_v2 / 0.6 TB | Standard_E20_v5 / 0.8 TB | Standard_E20_v5 / 1.0 TB | Standard_E32_v5 / 1.2 TB | Standard_E32_v5 / 1.4 TB | Standard_E32_v5 / 1.6 TB | Standard_E48_v5 / 1.8 TB | Standard_E48_v5 / 2.0 TB |
| 20k | Standard_D14_v2 / 0.4 TB | Standard_D14_v2 / 0.6 TB | Standard_E20_v5 / 0.8 TB | Standard_E20_v5 / 1.0 TB | Standard_E32_v5 / 1.2 TB | Standard_E32_v5 / 1.4 TB | Standard_E32_v5 / 1.6 TB | Standard_E48_v5 / 1.8 TB | Standard_E48_v5 / 2.0 TB | Standard_E48_v5 / 2.2 TB |
| 30k | Standard_D14_v2 / 0.6 TB | Standard_E20_v5 / 0.8 TB | Standard_E20_v5 / 1.0 TB | Standard_E32_v5 / 1.2 TB | Standard_E32_v5 / 1.4 TB | Standard_E32_v5 / 1.6 TB | Standard_E48_v5 / 1.8 TB | Standard_E48_v5 / 2.0 TB | Standard_E48_v5 / 2.2 TB | Standard_E48_v5 / 2.4 TB |
| 40k | Standard_E20_v5 / 0.8 TB | Standard_E20_v5 / 1.0 TB | Standard_E32_v5 / 1.2 TB | Standard_E32_v5 / 1.4 TB | Standard_E32_v5 / 1.6 TB | Standard_E48_v5 / 1.8 TB | Standard_E48_v5 / 2.0 TB | Standard_E48_v5 / 2.2 TB | Standard_E48_v5 / 2.4 TB | Standard_E64_v5 / 2.6 TB |
| 50k | Standard_E20_v5 / 1.0 TB | Standard_E32_v5 / 1.2 TB | Standard_E32_v5 / 1.4 TB | Standard_E32_v5 / 1.6 TB | Standard_E48_v5 / 1.8 TB | Standard_E48_v5 / 2.0 TB | Standard_E48_v5 / 2.2 TB | Standard_E48_v5 / 2.4 TB | Standard_E48_v5 / 2.6 TB | Standard_E64_v5 / 2.8 TB |
| 60k | Standard_E32_v5 / 1.2 TB | Standard_E32_v5 / 1.4 TB | Standard_E32_v5 / 1.6 TB | Standard_E48_v5 / 1.8 TB | Standard_E48_v5 / 2.0 TB | Standard_E48_v5 / 2.2 TB | Standard_E48_v5 / 2.4 TB | Standard_E48_v5 / 2.6 TB | Standard_E64_v5 / 2.8 TB | Standard_E64_v5 / 3.0 TB |
| 70k | Standard_E32_v5 / 1.4 TB | Standard_E32_v5 / 1.6 TB | Standard_E48_v5 / 1.8 TB | Standard_E48_v5 / 2.0 TB | Standard_E48_v5 / 2.2 TB | Standard_E48_v5 / 2.4 TB | Standard_E48_v5 / 2.6 TB | Standard_E64_v5 / 2.8 TB | Standard_E64_v5 / 3.0 TB | Standard_E64_v5 / 3.2 TB |
| 80k | Standard_E32_v5 / 1.6 TB | Standard_E48_v5 / 1.8 TB | Standard_E48_v5 / 2.0 TB | Standard_E48_v5 / 2.2 TB | Standard_E48_v5 / 2.4 TB | Standard_E48_v5 / 2.6 TB | Standard_E64_v5 / 2.8 TB | Standard_E64_v5 / 3.0 TB | Standard_E64_v5 / 3.2 TB | Standard_E64_v5 / 3.4 TB |
| 90k | Standard_E48_v5 / 1.8 TB | Standard_E48_v5 / 2.0 TB | Standard_E48_v5 / 2.2 TB | Standard_E48_v5 / 2.4 TB | Standard_E48_v5 / 2.6 TB | Standard_E64_v5 / 2.8 TB | Standard_E64_v5 / 3.0 TB | Standard_E64_v5 / 3.2 TB | Standard_E64_v5 / 3.4 TB | Standard_E64_v5 / 3.6 TB |
| 100k | Standard_E48_v5 / 2.0 TB | Standard_E48_v5 / 2.2 TB | Standard_E48_v5 / 2.4 TB | Standard_E64_v5 / 2.6 TB | Standard_E64_v5 / 2.8 TB | Standard_E64_v5 / 3.0 TB | Standard_E64_v5 / 3.2 TB | Standard_E64_v5 / 3.4 TB | Standard_E64_v5 / 3.6 TB | Standard_E64_v5 / 3.6 TB |

Note

To learn more about ML Engine licensing options and deployment procedures, contact Plixer Technical Support.