If your perf is not working…

I was trying to perf LLVM's opt this week and got confused because the perf report showed mangled function names in the stack traces. So I might as well write an article about it, hope it gets some Google juice, and save others from the same confusion.

  • To build LLVM with complete stack traces, build with CMAKE_BUILD_TYPE=Debug
  • To allow perf to work on LLVM, build with LLVM_USE_PERF=ON

At first I thought the mangled function names showed up because perf did not recognize the symbols. It turned out I was wrong: if perf does not recognize a symbol, it simply shows Unknown. The mixed-up symbol names are C++ name mangling of the function calls. (see more here)

You can strip the mangling simply with llvm-cxxfilt (see more here). For the exact correct mapping you may need to use llvm-cxxmap (see here).

Even with the correct build I still got mangled output. It turned out that the perf I apt-get-ed was compiled with the demangling feature turned off. The bug was filed against Linux back in November 2014, and I was still hitting it in June 2021 🤢 (on Ubuntu 16.04, linux 4.4.0-131 generic x86_64).

Mangled Output, visualization by FlameGraph

There are 3 links in the bug thread above that may lead you to the solution. Personally, I downloaded the perf source from mirrors.edge.kernel.org and followed the instructions here to build the perf I needed.

Clean Output, visualization by FlameGraph

NOTE: you may need to apt-get some dependencies to enable certain features in perf. Be sure to check the compile messages when you build from source. You can check which features are enabled with perf version --build-options.

PS: just in case you came here because perf is not working on your own code compiled with clang or gcc, you may want to look at this Stack Overflow answer.

Recursive template metaprogramming (Part III)

In the previous part, I went through some practices for abstraction when writing recursions.

This part will be the last of the current topic. I will implement MergeSort with recursive template metaprogramming.

Continue reading “Recursive template metaprogramming (Part III)”

Integer Quantization for Deep Learning Inference: Principle and Empirical Evaluation

CodiMD – Collaborative markdown notes

Paper link. This is an introductory paper; below is a summary taken while reading.

1 – Intro

Currently in the DL field, 32-bit floating point is the dominant numerical representation.


In actual model inference, the values used don't need to be that precise, so representations with fewer bits are used instead. The main reasons:

  • higher throughput
  • lower memory bandwidth requirements
  • smaller memory footprint, so the cache can hold more data and locality improves

This paper introduces the basic math behind quantization and calibration, training for model calibration, and actual calibration results on datasets.

Quantization falls into two major categories:

  • Post Training Quantization (PTQ)
  • Quantization Aware Training (QAT)

This paper focuses on using quantization to speed up computation, so it uses a uniform quantization scheme.

3 – Quantization Fundamentals

Uniform quantization scheme

Uniform quantization involves 2 steps:

  1. choose the range of real numbers to be quantized, clamping outliers outside the range
  2. map the real values onto the representable integer range
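The two steps can be sketched in Python. This is a minimal sketch: the range, bit-width, and the final clip to the signed representable range are illustrative assumptions, not the paper's reference code.

```python
import numpy as np

def uniform_quantize(x, alpha, beta, b=8):
    """Step 1: clamp to the chosen real range [beta, alpha].
    Step 2: map the clamped values linearly onto the signed b-bit range."""
    x = np.clip(x, beta, alpha)               # clamp outliers
    s = (2**b - 1) / (alpha - beta)           # scale factor
    z = -round(beta * s) - 2**(b - 1)         # zero point
    # Round, then clip to the representable signed range.
    return np.clip(np.round(s * x + z),
                   -2**(b - 1), 2**(b - 1) - 1).astype(np.int32)

q = uniform_quantize(np.array([-1.5, 0.0, 0.7, 2.0]), alpha=1.0, beta=-1.0)
```

Note how the outliers -1.5 and 2.0 saturate at the ends of the int8 range instead of stretching the scale.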

Quantize / Dequantize

  • Quantize: real number to integer representation (fp32 to int8)
  • Dequantize: integer representation to real number (int32 to fp16)

3.1 Range mapping

  • Range of real values: [β, α]

  • Bit-width of the representation: b

    • signed integer range: [−2^(b−1), 2^(b−1)−1]
    • unsigned integer range: [0, 2^b − 1]
  • Uniform transformation:

    • affine: f(x) = s·x + z, with s, z ∈ ℝ
    • scale: f(x) = s·x, with s ∈ ℝ

3.1.1 Affine quantization


f(x) = s·x + z, with s, z ∈ ℝ

  • s = (2^b − 1) / (α − β)
  • z = −round(β·s) − 2^(b−1)

You can think of it as: given two conditions, (1) the real value range and (2) the representable range, the values are projected linearly from one onto the other.
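The projection and its inverse can be checked numerically. The range [β, α] = [−2, 3] and the sample value below are purely illustrative:

```python
import numpy as np

b, alpha, beta = 8, 3.0, -2.0          # illustrative bit-width and range
s = (2**b - 1) / (alpha - beta)        # s = (2^b - 1) / (alpha - beta)
z = -round(beta * s) - 2**(b - 1)      # z = -round(beta * s) - 2^(b-1)

def quantize(x):
    # f(x) = s*x + z, rounded and clipped to the signed b-bit range
    return int(np.clip(round(s * x + z), -2**(b - 1), 2**(b - 1) - 1))

def dequantize(xq):
    return (xq - z) / s                # inverse of f(x) = s*x + z

x = 1.2345
xq = quantize(x)
err = abs(dequantize(xq) - x)          # round-trip error is at most 1/(2s)
```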


Dequantize is the inverse of the original f(x):

x̂ = (x_q − z) / s

3.1.2 Scale quantization


Scale quantization is a special case of affine quantization: it uses 0 as the reference point, setting the real value range to [−α, α].

  • s = (2^(b−1) − 1) / α

Dequantize multiplies back by the reciprocal of s: x̂ = x_q / s.

(The scheme above may use either ceil or round-to-nearest.)
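A minimal int8 scale-quantization sketch; α = 1 and the clip to ±(2^(b−1)−1) are illustrative choices:

```python
import numpy as np

def scale_quantize(x, alpha, b=8):
    """Symmetric (scale) quantization over the range [-alpha, alpha]."""
    s = (2**(b - 1) - 1) / alpha                         # s = (2^(b-1) - 1) / alpha
    xq = np.clip(np.round(s * x), -(2**(b - 1) - 1), 2**(b - 1) - 1)
    return xq.astype(np.int32), s

def scale_dequantize(xq, s):
    return xq / s                                        # multiply back by 1/s

xq, s = scale_quantize(np.array([-0.5, 0.0, 0.25, 0.9]), alpha=1.0)
```

There is no zero point here, which is exactly what makes the dequantize step a single division.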

3.2 Tensor quantization granularity

Granularity is another key aspect of quantization.

The coarsest option is for every element in a tensor to share the same quantization parameters; the finest is for every element to have its own.


  • per column / per row (for 2-D tensors, like 2D-CNN activations)
  • per channel (for 3-D tensors, like images)

Factors to consider when evaluating a quantization scheme:

  • accuracy
  • computation cost

Computation cost

Linear layer (fully-connected): Y = XW, where

  • X = (x_ik) ∈ ℝ^(m×p) (input tensor)
  • W = (w_kj) ∈ ℝ^(p×n) (weight tensor)
  • Y = (y_ij) ∈ ℝ^(m×n) (output tensor)

The quantized tensors can be expressed as…

  • X_q = (x_q,ik) ∈ ℤ^(m×p) (quantized input tensor)
  • W_q = (w_q,kj) ∈ ℤ^(p×n) (quantized weight tensor)

We can derive the relation between the quantized Y and the original Y:


  • y_ij = Σ_{k=1}^{p} dequantize(x_q,ik) · dequantize(w_q,kj)
  • = Σ_{k=1}^{p} (1/s_x,ik) · x_q,ik · (1/s_w,kj) · w_q,kj

If the scaling factors can be pulled out of the sum, i.e. the scale s is made independent of k, the expression keeps the form of a matrix multiplication. The quantized x and w can then be multiplied as matrices directly, and hardware can execute such instructions very efficiently.


The derivation above shows that with per-row / per-column granularity or coarser, the computation stays an integer matrix multiplication, which also means inference needs no extra per-element work on the quantized elements.

Therefore "quantization per row or coarser" guarantees computational efficiency.
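This can be verified numerically: with one scale for the input tensor and one scale per weight column (both independent of k), a pure integer matmul followed by a single rescale approximates the real-valued product. The scales below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (4, 5))       # real-valued input,  m x p
W = rng.uniform(-1, 1, (5, 3))       # real-valued weight, p x n

s_x = 127.0                          # one scale for the whole input tensor
s_w = 127.0 / np.abs(W).max(axis=0)  # one scale per output column (independent of k)

Xq = np.round(s_x * X).astype(np.int32)
Wq = np.round(s_w * W).astype(np.int32)

Y_int = Xq @ Wq                      # pure integer matrix multiplication
Y_hat = Y_int / (s_x * s_w)          # a single rescale per output column
err = np.abs(Y_hat - X @ W).max()    # small quantization error only
```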

3.3 Computational cost of affine quantization

Affine quantization requires extra work on the internal activation data during computation, so it is the less efficient quantization method, though it does offer more parameters to tune.

We can likewise derive the relation between the original output and the quantized output.


  • y_ij = Σ_{k=1}^{p} dequantize(x_q,ik) · dequantize(w_q,kj)
  • = Σ_{k=1}^{p} (1/s_x)(x_q,ik − z_x) · (1/s_w,j)(w_q,kj − z_w,j)

Expanding the product:

y_ij = (1/(s_x · s_w,j)) [ Σ_{k=1}^{p} x_q,ik · w_q,kj − Σ_{k=1}^{p} (z_x · w_q,kj − z_x · z_w,j) − Σ_{k=1}^{p} x_q,ik · z_w,j ]

The three kinds of terms:

  • integer dot product: Σ_{k=1}^{p} x_q,ik · w_q,kj
  • integer weights and zero points (z) only: Σ_{k=1}^{p} (z_x · w_q,kj − z_x · z_w,j). This part can be precomputed offline from the weights, and for operators that have a bias, this zero-point dot product can be fused into the bias.
  • a term involving the input tensor that cannot be computed offline: Σ_{k=1}^{p} x_q,ik · z_w,j. This overhead exists because affine quantization is applied to the weight tensor; using plain scale quantization for the weights removes this term, which reduces overhead. Sometimes the computation cost of this term even cancels out the gains from hardware-accelerated integer dot products.
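The three-term decomposition can be checked directly; the zero points z_x and z_w below are illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
Xq = rng.integers(0, 256, (4, 5))      # affine-quantized input (unsigned 8-bit)
Wq = rng.integers(0, 256, (5, 3))      # affine-quantized weights
z_x = 7                                # input zero point (scalar)
z_w = np.array([3, 9, 5])              # per-column weight zero points
p = Xq.shape[1]

full = (Xq - z_x) @ (Wq - z_w)         # what the layer actually needs

term1 = Xq @ Wq                               # integer dot product
term2 = z_x * Wq.sum(axis=0) - p * z_x * z_w  # weights + zero points: offline
term3 = Xq.sum(axis=1, keepdims=True) * z_w   # involves the input: online only

ok = np.array_equal(full, term1 - term2 - term3)  # decomposition is exact
```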

3.4 Calibration

Calibration is the process of choosing the real value range, [β, α], for model weights and activations.

Three common calibration methods:

  • Max: use the extreme values; the most straightforward approach.
  • Entropy: compute the KL divergence and try to minimize the information loss; theoretically the best approach.
  • Percentile: e.g. take only the central 99.99% of the distribution's elements as the [β, α] range. This avoids extreme values that sit far from the main distribution.
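Max and percentile calibration can be sketched as follows (entropy calibration needs a histogram/KL-divergence search and is omitted; the function names are illustrative, and the 99.99% figure follows the text):

```python
import numpy as np

def max_calibrate(x):
    """Use the extreme values as the range."""
    return float(x.min()), float(x.max())

def percentile_calibrate(x, pct=99.99):
    """Keep the central pct% of the distribution, discarding outliers."""
    lo = (100.0 - pct) / 2.0
    return (float(np.percentile(x, lo)),
            float(np.percentile(x, 100.0 - lo)))

# 100k roughly-normal activations plus one extreme outlier.
acts = np.concatenate([np.random.default_rng(2).normal(0, 1, 100_000),
                       [50.0]])
beta, alpha = percentile_calibrate(acts)   # outlier no longer sets the range
```

With max calibration the single outlier would stretch the range to 50 and waste almost the entire integer grid.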

4 – Post training quantization

As the previous chapter showed, we can calibrate the weights in a model, and we can feed in some input data to obtain calibration results.

Models come in many kinds: feed-forward CNNs, RNNs, attention-based NNs.

4.1 Weight quantization


Quantization method: max.

  • Per channel is better than per tensor.
  • BN folding does not affect per-channel calibration.
  • BN folding does affect per-tensor calibration.

4.2 Activation quantization


  • Entropy is best for most networks.
  • Max is never a good choice here.
  • Percentile occasionally beats entropy.
  • mobilenet, efficientnet, and bert show > 1% accuracy loss.

Short takeaway: no single calibration method is best for all networks.

5 – Accuracy recovery

When calibration does bring accuracy loss, there are methods to recover the accuracy.

5.1 Partial quantization

Often a few specific NN layers are responsible for the accuracy loss. One workaround is to have the CPU run the layers that cause the distortion (leaving them unquantized).

Testing combinations of partial quantization grows exponentially, so instead compare layers by their single-layer accuracy: rank all the layers, then disable quantization layer by layer until the desired accuracy is hit. (a fairly straightforward greedy heuristic)

Identifying which layers hurt accuracy is called sensitivity analysis.
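A sketch of the greedy heuristic, assuming per-layer accuracy drops are known from sensitivity analysis and combine roughly additively (both simplifying assumptions; `partial_quantize` and all numbers below are hypothetical):

```python
def partial_quantize(sensitivity, full_acc, target_acc):
    """Greedy heuristic: skip quantization for the most sensitive layers
    first, until the estimated accuracy reaches the target.

    sensitivity: {layer_name: accuracy drop when that layer is quantized}
    Assumes drops combine additively -- an approximation, not a guarantee."""
    skipped = []
    acc = full_acc - sum(sensitivity.values())     # everything quantized
    # Most damaging layers first.
    for layer, drop in sorted(sensitivity.items(), key=lambda kv: -kv[1]):
        if acc >= target_acc:
            break
        skipped.append(layer)                      # leave this layer in fp32
        acc += drop
    return skipped, acc

# Illustrative numbers only.
sens = {"conv1": 0.25, "block3": 1.0, "fc": 0.5}
skipped, acc = partial_quantize(sens, full_acc=76.0, target_acc=75.5)
```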

5.2 Quantization aware training (QAT)

Insert quantization into the network before training.

The intuition is that when we train with quantization, gradient descent may be steered toward an optimum the quantized model can actually reach: make the model aware of integer-ness, and find "wide and flat" minima.

A common approach is fake quantization, also called simulated quantization.

fake quantize: x̂ = dequantize(quantize(x, b, s), b, s)

For the places that are not differentiable, use the Straight-Through Estimator (STE):


  • if the value is within the real value range, return 1
  • else, return 0
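Fake quantization and the STE can be sketched with numpy: the forward pass quantizes then dequantizes, and the backward pass passes gradients through inside the representable range and blocks them outside. This is a sketch; `ste_grad` is a hypothetical helper, not a real framework API.

```python
import numpy as np

def fake_quantize(x, s, b=8):
    """x_hat = dequantize(quantize(x, b, s), b, s)."""
    lo, hi = -2**(b - 1), 2**(b - 1) - 1
    xq = np.clip(np.round(s * x), lo, hi)   # quantize
    return xq / s                           # dequantize immediately

def ste_grad(x, s, b=8, upstream=1.0):
    """Straight-Through Estimator: pass the gradient through (1)
    inside the representable range, block it (0) outside."""
    lo, hi = -2**(b - 1), 2**(b - 1) - 1
    inside = (s * x >= lo) & (s * x <= hi)
    return upstream * inside.astype(x.dtype)

x = np.array([-2.0, 0.3, 0.9, 5.0])
x_hat = fake_quantize(x, s=127.0)   # differs from x only by quantization noise
g = ste_grad(x, s=127.0)            # [0., 1., 1., 0.]
```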

Sometimes QAT even brings better model accuracy, because quantization also has a regularizing effect.

5.3 Learning quantization parameters

It is also possible to learn quantization parameters along with the model weight.

PACT learns the range of activation for activation quantization during training.

Initialized with max calibration (does this mean "initialized with the max-calibration parameters"?)

  • learning the range (real value range) results in better accuracy

Initialized with best calibration range

  • yields similar result as initialized with max calibration

Short takeaway: learning the range (QAT) doesn't offer additional benefit when given a carefully calibrated range.

The paper does note that PACT may be more useful elsewhere; here it is only used to demonstrate what QAT does.

6 – Workflow

Pretrained Network –> PTQ –> Partial Quantization –> QAT (starting from the best calibration)

Self reflection

I finally understand quantization and calibration better!!!