The recent successes in AI are often attributed to the emergence and evolution of the GPU. The GPU's architecture, which typically includes thousands of multi-processors, high-bandwidth memory, dedicated tensor cores, and more, is particularly well-suited to meet the intensive demands of AI/ML workloads. Unfortunately, the rapid growth in AI development has led to a surge in the demand for GPUs, making them difficult to acquire. As a result, ML developers are increasingly exploring alternative hardware options for training and running their models. In previous posts, we discussed the possibility of training on dedicated AI ASICs such as Google Cloud TPU, Habana Gaudi, and AWS Trainium. While these options offer significant cost-saving opportunities, they do not suit all ML models and can, like the GPU, also suffer from limited availability. In this post we return to the good old-fashioned CPU and revisit its relevance to ML applications. Although CPUs are generally less suited to ML workloads than GPUs, they are much easier to acquire. The ability to run (at least some of) our workloads on CPU could have significant implications for development productivity.
In previous posts (e.g., here) we emphasized the importance of analyzing and optimizing the runtime performance of AI/ML workloads as a means of accelerating development and minimizing costs. While this is crucial regardless of the compute engine used, the profiling tools and optimization techniques can vary greatly between platforms. In this post, we will discuss some of the performance optimization options that pertain to CPU. Our focus will be on Intel® Xeon® CPU processors (with Intel® AVX-512) and on the PyTorch (version 2.4) framework (although similar techniques can be applied to other CPUs and frameworks as well). More specifically, we will run our experiments on an Amazon EC2 c7i instance with an AWS Deep Learning AMI. Please do not view our choice of cloud platform, CPU version, ML framework, or any other tool or library we mention as an endorsement over their alternatives.
Our goal will be to demonstrate that although ML development on CPU may not be our first choice, there are ways to "soften the blow" and, in some cases, perhaps even make it a viable alternative.
Disclaimers
Our intention in this post is to demonstrate just a few of the ML optimization opportunities available on CPU. Contrary to most of the online tutorials on the topic of ML optimization on CPU, we will focus on a training workload rather than an inference workload. There are a number of optimization tools focused specifically on inference that we will not cover (e.g., see here and here).
Please do not view this post as a replacement for the official documentation on any of the tools or techniques that we mention. Keep in mind that given the rapid pace of AI/ML development, some of the content, libraries, and/or instructions that we mention may become outdated by the time you read this. Please be sure to refer to the most up-to-date documentation available.
Importantly, the impact of the optimizations that we discuss on runtime performance is likely to vary greatly based on the model and the details of the environment (e.g., see the high degree of variance between models on the official PyTorch TorchInductor CPU Inference Performance Dashboard). The comparative performance numbers we share are specific to the toy model and runtime environment that we use. Be sure to reevaluate all of the proposed optimizations on your own model and runtime environment.
Lastly, our focus will be solely on throughput performance (as measured in samples per second), not on training convergence. However, it should be noted that some optimization techniques (e.g., batch size tuning, mixed precision, and more) can have a negative effect on the convergence of certain models. In some cases, this can be overcome through appropriate hyperparameter tuning.
We will run our experiments on a simple image classification model with a ResNet-50 backbone (from Deep Residual Learning for Image Recognition). We will train the model on a fake dataset. The full training script appears in the code block below (loosely based on this example):
import time

import torch
import torchvision
from torch.utils.data import Dataset, DataLoader

# A dataset with random images and labels
class FakeDataset(Dataset):
    def __len__(self):
        return 1000000

    def __getitem__(self, index):
        rand_image = torch.randn([3, 224, 224], dtype=torch.float32)
        label = torch.tensor(data=index % 10, dtype=torch.int64)
        return rand_image, label

train_set = FakeDataset()

batch_size = 128
num_workers = 0

train_loader = DataLoader(
    dataset=train_set,
    batch_size=batch_size,
    num_workers=num_workers
)

model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters())
model.train()

t0 = time.perf_counter()
summ = 0
count = 0
for idx, (data, target) in enumerate(train_loader):
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    batch_time = time.perf_counter() - t0
    if idx > 10:  # skip first steps
        summ += batch_time
        count += 1
    t0 = time.perf_counter()
    if idx > 100:
        break

print(f'average step time: {summ/count}')
print(f'throughput: {count*batch_size/summ}')
Running this script on a c7i.2xlarge (with 8 vCPUs) and the CPU version of PyTorch 2.4 results in a throughput of 9.12 samples per second. For the sake of comparison, we note that the throughput of the same (unoptimized) script on an Amazon EC2 g5.2xlarge instance (with 1 GPU and 8 vCPUs) is 340 samples per second. Taking into account the comparative costs of the two instance types ($0.357 per hour for a c7i.2xlarge and $1.212 per hour for a g5.2xlarge, as of the time of this writing), we find that training on the GPU instance gives roughly eleven(!!) times better price performance. Based on these results, the preference for using GPUs to train ML models is very well founded. Let's assess some of the possibilities for reducing this gap.
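To see where the factor of eleven comes from, we can normalize throughput by cost: the CPU instance delivers 9.12 × 3600 / 0.357 ≈ 92,000 samples per dollar, while the GPU instance delivers 340 × 3600 / 1.212 ≈ 1,010,000 samples per dollar, a ratio of roughly 11.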
In this section we will explore some basic methods for increasing the runtime performance of our training workload. Although you may recognize some of these from our post on GPU optimization, it is important to highlight a significant difference between training optimization on CPU and GPU platforms. On GPU platforms, much of our effort was dedicated to maximizing the parallelization between (the training data preprocessing on) the CPU and (the model training on) the GPU. On CPU platforms, all of the processing occurs on the CPU, and our goal will be to allocate its resources most effectively.
Batch Size
Increasing the training batch size can potentially boost performance by reducing the frequency of model parameter updates. (On GPUs it has the added benefit of reducing the overhead of CPU-GPU transactions such as kernel loading.) However, while on GPU we aimed for a batch size that would maximize the utilization of the GPU memory, the same strategy might hurt performance on CPU. For reasons beyond the scope of this post, CPU memory behavior is more complicated, and the best approach for finding the optimal batch size may be trial and error. Keep in mind that changing the batch size could affect training convergence.
The table below summarizes the throughput of our training workload for several (arbitrary) choices of batch size:
Contrary to our findings on GPU, on the c7i.2xlarge instance type our model appears to prefer lower batch sizes.
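One way to run this kind of trial-and-error search is to wrap the timing loop from our training script in a small helper and sweep over candidate values. The following is a minimal sketch of our own (not part of the original script); it reuses the model, criterion, optimizer, FakeDataset, and imports defined above:

def benchmark(batch_size, num_workers=0):
    # measure average throughput (samples/sec) for a given configuration
    loader = DataLoader(FakeDataset(), batch_size=batch_size,
                        num_workers=num_workers)
    t0, summ, count = time.perf_counter(), 0, 0
    for idx, (data, target) in enumerate(loader):
        optimizer.zero_grad()
        loss = criterion(model(data), target)
        loss.backward()
        optimizer.step()
        if idx > 10:  # skip warm-up steps
            summ += time.perf_counter() - t0
            count += 1
        t0 = time.perf_counter()
        if idx > 100:
            break
    return count * batch_size / summ

for batch_size in [32, 64, 128, 256]:
    print(f'batch size {batch_size}: {benchmark(batch_size):.2f} samples/sec')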
Multi-process Data Loading
A common technique on GPUs is to assign multiple processes to the data loader so as to reduce the likelihood of starvation of the GPU. On GPU platforms, a general rule of thumb is to set the number of workers according to the number of CPU cores. However, on CPU platforms, where the model training uses the same resources as the data loader, this approach could backfire. Once again, the best approach for choosing the optimal number of workers may be trial and error. The table below shows the average throughput for different choices of num_workers:
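This sweep can reuse the hypothetical benchmark helper we sketched in the batch size section:

for num_workers in [0, 2, 4, 8]:
    throughput = benchmark(batch_size=128, num_workers=num_workers)
    print(f'num_workers {num_workers}: {throughput:.2f} samples/sec')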
Mixed Precision
Another popular technique is to use lower-precision floating point datatypes such as torch.float16 or torch.bfloat16, with the dynamic range of torch.bfloat16 generally considered to be more amenable to ML training. Naturally, reducing the datatype precision can have adverse effects on convergence and should be done carefully. PyTorch comes with torch.amp, an automatic mixed precision package for optimizing the use of these datatypes. Intel® AVX-512 includes support for the bfloat16 datatype. The modified training step appears below:
for idx, (data, target) in enumerate(train_loader):
    optimizer.zero_grad()
    # run the forward pass in bfloat16 under autocast
    with torch.amp.autocast('cpu', dtype=torch.bfloat16):
        output = model(data)
        loss = criterion(output, target)
    loss.backward()
    optimizer.step()
The throughput following this optimization is 24.34 samples per second, an increase of 86%!!
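Note that bfloat16 typically does not require gradient scaling, but if you experiment with torch.float16 instead, its narrower dynamic range may cause gradients to underflow. A minimal sketch of how gradient scaling could be added under that assumption (we did not use it in our experiments):

scaler = torch.amp.GradScaler('cpu')

for idx, (data, target) in enumerate(train_loader):
    optimizer.zero_grad()
    with torch.amp.autocast('cpu', dtype=torch.float16):
        output = model(data)
        loss = criterion(output, target)
    scaler.scale(loss).backward()  # scale the loss to avoid gradient underflow
    scaler.step(optimizer)         # unscale gradients and apply the update
    scaler.update()                # adjust the scale factor for the next step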
Channels Last Memory Format
Channels-last memory format is a beta-level optimization (at the time of this writing), pertaining primarily to vision models, that supports storing four-dimensional (NCHW) tensors in memory such that the channels are the last dimension. This results in all of the data of each pixel being stored together. Considered to be more "friendly to Intel platforms", this memory format is reported to boost the performance of a ResNet-50 on an Intel® Xeon® CPU. The adjusted training step appears below:
for idx, (data, target) in enumerate(train_loader):
    # convert the input batch to channels-last (NHWC) memory format
    data = data.to(memory_format=torch.channels_last)
    optimizer.zero_grad()
    with torch.amp.autocast('cpu', dtype=torch.bfloat16):
        output = model(data)
        loss = criterion(output, target)
    loss.backward()
    optimizer.step()
The resulting throughput is 37.93 samples per second, a further 56% improvement and a total of 415% compared to our baseline experiment. We are on a roll!!
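Note that the PyTorch channels-last documentation also recommends converting the model weights themselves, so that convolution layers operate natively in the NHWC layout. A one-line sketch of that conversion, applied to our model:

# convert the model weights to channels-last memory format as well
model = model.to(memory_format=torch.channels_last)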
Torch Compilation
In a previous post we covered the virtues of PyTorch's support for graph compilation and its potential impact on runtime performance. Contrary to the default eager execution mode in which each operation is run independently (a.k.a. "eagerly"), the compile API converts the model into an intermediate computation graph which is then JIT-compiled into low-level machine code in a manner that is optimal for the underlying training engine. The API supports compilation via different backend libraries and with multiple configuration options. Here we will limit our evaluation to the default (TorchInductor) backend and the ipex backend from the Intel® Extension for PyTorch, a library with dedicated optimizations for Intel hardware. Please see the documentation for appropriate installation and usage instructions. The updated model definition appears below:
import intel_extension_for_pytorch as ipex

model = torchvision.models.resnet50()
backend = 'inductor'  # optionally change to 'ipex'
model = torch.compile(model, backend=backend)
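Among the configuration options mentioned above, the mode argument of torch.compile trades longer compilation time for more aggressive optimization. A sketch of what that might look like (we did not evaluate this setting in our experiments):

# request more aggressive autotuning from the TorchInductor backend
model = torch.compile(model, backend='inductor', mode='max-autotune')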
In the case of our toy model, the impact of torch compilation is only apparent when the "channels last" optimization is disabled (an increase of ~27% for each of the backends). When "channels last" is applied, the performance actually drops. As a result, we drop this optimization from our subsequent experiments.
There are a number of opportunities for optimizing the use of the underlying CPU resources. These include tuning memory management and thread allocation to the structure of the underlying CPU hardware. Memory management can be improved through the use of advanced memory allocators (such as Jemalloc and TCMalloc) and/or by reducing slower memory accesses (i.e., across NUMA nodes). Thread allocation can be improved through appropriate configuration of the OpenMP threading library and/or use of Intel's Open MP library.
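For reference, such libraries are typically wired in manually by preloading them into the Python process. A sketch of what this might look like; the library paths are assumptions and should be adjusted to your environment:

# preload TCMalloc and Intel OpenMP before launching the training script
export LD_PRELOAD=$CONDA_PREFIX/lib/libtcmalloc.so:$CONDA_PREFIX/lib/libiomp5.so
python train.py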
Generally speaking, these kinds of optimizations require a deep understanding of the CPU architecture and the features of its supporting software stack. To simplify matters, PyTorch offers the torch.backends.xeon.run_cpu script for automatically configuring the memory and threading libraries so as to optimize runtime performance. The command below will result in the use of the dedicated memory and threading libraries. We will return to the topic of NUMA nodes when we discuss the option of distributed training.
We verify appropriate installation of TCMalloc (conda install conda-forge::gperftools) and Intel's Open MP library (pip install intel-openmp), and run the following command:
python -m torch.backends.xeon.run_cpu train.py
Using the run_cpu script further boosts our runtime performance to 39.05 samples per second. Note that the run_cpu script includes many controls for further tuning performance. Be sure to check out the documentation in order to maximize its use.
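The full list of controls (e.g., the number of instances and cores per instance, and the choice of allocator) can be inspected directly:

python -m torch.backends.xeon.run_cpu --help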
The Intel® Extension for PyTorch includes additional opportunities for training optimization via its ipex.optimize function. Here we demonstrate its default use. Please see the documentation to learn of its full capabilities.
model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters())
model.train()
# apply IPEX optimizations to the model and optimizer, casting to bfloat16
model, optimizer = ipex.optimize(
    model,
    optimizer=optimizer,
    dtype=torch.bfloat16
)
Combined with the memory and thread optimizations discussed above, the resulting throughput is 40.73 samples per second. (Note that a similar result is reached when disabling the "channels last" configuration.)
Intel® Xeon® processors are designed with Non-Uniform Memory Access (NUMA), in which the CPU memory is divided into groups, a.k.a. NUMA nodes, and each of the CPU cores is assigned to one node. Although any CPU core can access the memory of any NUMA node, access to its own node (i.e., its local memory) is much faster. This gives rise to the notion of distributing training across NUMA nodes, where the CPU cores assigned to each NUMA node act as a single process in a distributed process group and data distribution across nodes is managed by Intel® oneCCL, Intel's dedicated collective communications library.
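You can inspect the NUMA layout of your machine with standard Linux tooling, for example:

# show the number of NUMA nodes and the CPU cores assigned to each
lscpu | grep -i numa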
We can run data distributed training across NUMA nodes easily using the ipexrun utility. In the following code block (loosely based on this example) we adapt our script to run data distributed training (according to the usage detailed here):
import os, time
import torch
from torch.utils.data import Dataset, DataLoader
from torch.utils.data.distributed import DistributedSampler
import torch.distributed as dist
import torchvision
import oneccl_bindings_for_pytorch as torch_ccl
import intel_extension_for_pytorch as ipex

os.environ["MASTER_ADDR"] = "127.0.0.1"
os.environ["MASTER_PORT"] = "29500"
os.environ["RANK"] = os.environ.get("PMI_RANK", "0")
os.environ["WORLD_SIZE"] = os.environ.get("PMI_SIZE", "1")
dist.init_process_group(backend="ccl", init_method="env://")
rank = os.environ["RANK"]
world_size = os.environ["WORLD_SIZE"]

batch_size = 128
num_workers = 0

# define dataset and dataloader
class FakeDataset(Dataset):
    def __len__(self):
        return 1000000

    def __getitem__(self, index):
        rand_image = torch.randn([3, 224, 224], dtype=torch.float32)
        label = torch.tensor(data=index % 10, dtype=torch.int64)
        return rand_image, label

train_dataset = FakeDataset()
dist_sampler = DistributedSampler(train_dataset)
train_loader = DataLoader(
    dataset=train_dataset,
    batch_size=batch_size,
    num_workers=num_workers,
    sampler=dist_sampler
)

# define model artifacts
model = torchvision.models.resnet50()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters())
model.train()
model, optimizer = ipex.optimize(
    model,
    optimizer=optimizer,
    dtype=torch.bfloat16
)

# configure DDP
model = torch.nn.parallel.DistributedDataParallel(model)

# run training loop
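# (the loop below is our own completion of the elided step; it mirrors the
# mixed-precision training loop shown earlier in this post)
for idx, (data, target) in enumerate(train_loader):
    optimizer.zero_grad()
    with torch.amp.autocast('cpu', dtype=torch.bfloat16):
        output = model(data)
        loss = criterion(output, target)
    loss.backward()
    optimizer.step()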
# destroy the process group
dist.destroy_process_group()
Unfortunately, as of the time of this writing, the Amazon EC2 c7i instance family does not include a multi-NUMA instance type. To test our distributed training script, we revert to an Amazon EC2 c6i.32xlarge instance with 64 vCPUs and 2 NUMA nodes. We verify the installation of Intel® oneCCL Bindings for PyTorch and run the following command (as documented here):
source $(python -c "import oneccl_bindings_for_pytorch as torch_ccl;print(torch_ccl.cwd)")/env/setvars.sh

# This example command utilizes all of the NUMA sockets of the processor, taking each socket as a rank.
ipexrun cpu --nnodes 1 --omp_runtime intel train.py
The following table compares the performance results on the c6i.32xlarge instance with and without distributed training:
In our experiment, data distribution did not boost the runtime performance. Please see the ipexrun documentation for additional performance tuning options.
In previous posts (e.g., here) we discussed the PyTorch/XLA library and its use of XLA compilation to enable PyTorch-based training on XLA devices such as TPU, GPU, and CPU. Similar to torch compilation, XLA uses graph compilation to generate machine code that is optimized for the target device. With the establishment of the OpenXLA Project, one of the stated goals was to support high performance across all hardware backends, including CPU (see the CPU RFC here). The code block below demonstrates the adjustments to our original (unoptimized) script required to train using PyTorch/XLA:
import time

import torch
import torchvision
import torch_xla
import torch_xla.core.xla_model as xm

device = xm.xla_device()

model = torchvision.models.resnet50().to(device)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters())
model.train()

for idx, (data, target) in enumerate(train_loader):
    # move the input batch to the XLA device
    data = data.to(device)
    target = target.to(device)
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, target)
    loss.backward()
    optimizer.step()
    # trigger compilation and execution of the accumulated graph
    xm.mark_step()
Unfortunately, (as of the time of this writing) the XLA results on our toy model seem far inferior to the (unoptimized) results we saw above (by as much as 7X). We expect this to improve as PyTorch/XLA's CPU support matures.
We summarize the results of a subset of our experiments in the table below. For the sake of comparison, we add the throughput of training our model on an Amazon EC2 g5.2xlarge GPU instance following the optimization steps discussed in this post. The samples per dollar were calculated based on the Amazon EC2 On-Demand pricing page ($0.357 per hour for a c7i.2xlarge and $1.212 per hour for a g5.2xlarge, as of the time of this writing).
Although we succeeded in boosting the training performance of our toy model on the CPU instance by a considerable margin (446%), it remains inferior to the (optimized) performance on the GPU instance. Based on our results, training on GPU would be ~6.7 times cheaper. It is likely that with additional performance tuning and/or by applying additional optimization techniques, we could further close the gap. Once again, we emphasize that the comparative performance results we have reached are unique to this model and runtime environment.
Amazon EC2 Spot Instance Discounts
The increased availability of cloud-based CPU instance types (compared to GPU instance types) may imply greater opportunity for obtaining compute power at discounted rates, e.g., through Spot Instance utilization. Amazon EC2 Spot Instances are instances from surplus cloud service capacity that are offered at a discount of as much as 90% off the On-Demand pricing. In exchange for the discounted price, AWS maintains the right to preempt the instance with little to no warning. Given the high demand for GPUs, you may find CPU spot instances easier to get hold of than their GPU counterparts. At the time of this writing, the c7i.2xlarge Spot Instance price is $0.1291, which would improve our samples per dollar result to 1135.76 and further reduce the gap between the optimized GPU and CPU price performance (to 2.43X).
While the runtime performance results of the optimized CPU training of our toy model (in our chosen environment) were lower than the GPU results, it is likely that the same optimization steps applied to other model architectures (e.g., ones that include components that are not supported by GPU) may result in the CPU performance matching or beating that of the GPU. And even in cases where the performance gap is not bridged, there may very well be cases where the shortage of GPU compute capacity would justify running some of our ML workloads on CPU.
Given the ubiquity of the CPU, the ability to use it effectively for training and/or running ML workloads could have huge implications on development productivity and on end-product deployment strategy. While the nature of the CPU architecture is less amenable to many ML applications when compared to the GPU, there are many tools and techniques available for boosting its performance, a select few of which we have discussed and demonstrated in this post.
In this post we focused on optimizing training on CPU. Please be sure to check out our many other posts on Medium covering a wide variety of topics pertaining to performance analysis and optimization of machine learning workloads.