BGV/CKKS: support scale management #1459
Conversation
Force-pushed from ac97379 to 8253471.
Supporting scale has been quite messy, as we have to change the things below.
My idea is to skip the LWE type support and use an attribute to pass information temporarily, and to skip OpenFHE, as it supports that anyway. The pipeline works now for Lattigo; see the example:

```mlir
func.func @cross_level_add(%base: tensor<4xi16> {secret.secret}, %add: tensor<4xi16> {secret.secret}) -> tensor<4xi16> {
  // increase one level
  %mul1 = arith.muli %base, %add : tensor<4xi16>
  // cross level add
  %base1 = arith.addi %mul1, %add : tensor<4xi16>
  return %base1 : tensor<4xi16>
}
```

After proper management and calculation of the scale, we get:

```mlir
%1 = mgmt.modreduce %input0 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 4>} : tensor<4xi16>
%2 = mgmt.modreduce %input1 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 4>} : tensor<4xi16>
%3 = arith.muli %1, %2 {mgmt.mgmt = #mgmt.mgmt<level = 1, dimension = 3, scale = 16>} : tensor<4xi16>
%4 = mgmt.relinearize %3 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 16>} : tensor<4xi16>
// need to adjust the scale by mul_const delta_scale
%5 = mgmt.adjust_scale %input1 {delta_scale = 4 : i64, mgmt.mgmt = #mgmt.mgmt<level = 2, scale = 4>, scale = 4 : i64} : tensor<4xi16>
%6 = mgmt.modreduce %5 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 16>} : tensor<4xi16>
%7 = arith.addi %4, %6 {mgmt.mgmt = #mgmt.mgmt<level = 1, scale = 16>} : tensor<4xi16>
%8 = mgmt.modreduce %7 {mgmt.mgmt = #mgmt.mgmt<level = 0, scale = 65505>} : tensor<4xi16>
%cst = arith.constant dense<1> : tensor<4xi16>
%pt = lwe.rlwe_encode %cst {encoding = #full_crt_packing_encoding, lwe.scale = 4 : i64, ring = #ring_Z65537_i64_1_x4_} : tensor<4xi16> -> !pt
%ct_5 = bgv.mul_plain %ct_0, %pt : (!ct_L2_, !pt) -> !ct_L2_
```

When emitted to Lattigo with the debug handler, we can observe exactly the same scale changes.
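For intuition, here is the scale arithmetic behind the annotations above, as a small sketch. This is my reading of the BGV scale-tracking rules, not code from the PR; t = 65537 comes from the ring in the snippet, and the modulus residues are back-solved from the printed scales rather than taken from real parameters:

```python
# Sketch of BGV scale bookkeeping (assumed model, not the PR's code).
t = 65537  # plaintext modulus, from #ring_Z65537_... above

def mul_scale(s1, s2):
    # mul / mul_plain multiplies the scales of its operands (mod t)
    return s1 * s2 % t

def modreduce_scale(s, q_mod_t):
    # dropping RNS prime q_i multiplies the plaintext by q_i^{-1} mod t
    return s * pow(q_mod_t, -1, t) % t

# Residues of the dropped primes mod t, back-solved so the numbers below
# match the IR (real parameters would be large primes with these residues).
q2, q1 = 49153, 32768  # inverses mod t: 4 and -2

s_fresh = 1                        # inputs start at scale 1
s1 = modreduce_scale(s_fresh, q2)  # %1, %2: scale 4
s3 = mul_scale(s1, s1)             # %3, %4: scale 4 * 4 = 16
s5 = mul_scale(s_fresh, 4)         # %5: adjust_scale by delta_scale = 4
s6 = modreduce_scale(s5, q2)       # %6: scale 16, matches %4 for the add
s8 = modreduce_scale(s3, q1)       # %8: scale 16 * (-2) mod t = 65505
print(s1, s3, s6, s8)              # 4 16 16 65505
```

Note how adjust_scale picks delta_scale = 4 so that, after its own modreduce, %6 lands at scale 16 and matches %4 for the cross-level add.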
Talking about this in office hours. Some ideas:
Force-pushed from dc5374d to b7e6abf.
This reverts commit 1a8afa5.
Force-pushed from 50174dd to 180b6c9.
Until now, 99 files changed... it would be insane if more changes were introduced. Asking for review now because many technical changes need discussion/decisions. Doc/cleanup are not done yet.

Loop problem

The hard part of supporting scale management is making the scale match everywhere. The current state of the PR will break loop support. The intrinsic problem with loop support is that we need to make it sufficiently FHE-aware. This is the same problem as LevelAnalysis in #1181, where we want to know some invariant kept by the loop. We used to think about keeping the level/dimension the same; now we also need to keep the scale the same. The following example shows the current matmul code cannot survive the scale analysis:

```mlir
affine.for %result_tensor (assume 2^45 scale initially)
  %a, %const scale 2^45
  %0 = mul_plain %a, %const // result scale 2^90
  %1 = add %0, result // scale 2^90
  tensor.insert %1 into %result_tensor // scale mismatch!
```

This certainly needs some insight into the loop. We cannot even deal with the unrolled version, because we need some kind of back-propagation:

```mlir
%result = tensor.empty // how do we know its scale when we encounter it?
tensor.insert %sth into %result // only now do we know
```
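To make the mismatch concrete, here is a toy forward propagation of scales over the matmul loop body above. It is an illustration only, with made-up names; the real pass would be an MLIR dataflow analysis:

```python
# Toy forward scale propagation over the loop body (illustration only,
# not HEIR's actual analysis). Scales here are CKKS-style powers of two.
S0 = 2 ** 45

# Scales at loop entry: the loop-carried %result_tensor and the operands.
env = {"%result_tensor": S0, "%a": S0, "%const": S0}

env["%0"] = env["%a"] * env["%const"]  # mul_plain multiplies scales: 2^90
env["%1"] = env["%0"]                  # add keeps its operands' common scale

# tensor.insert %1 into %result_tensor needs equal scales, but 2^90 != 2^45:
# the "same scale at the loop back-edge" invariant is broken.
assert env["%1"] != env["%result_tensor"]  # scale mismatch!
```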
Current status

Problem with backend
See #1169
I am afraid we can only do this for the Lattigo backend, as OpenFHE does not expose an explicit API for setting the scale. That said, the policy implemented here is in line with OpenFHE's implementation; OpenFHE just does it automatically.
The detailed rationale/implementation of the scale management should be put in a design doc within this PR.
There are a few changes to support scale:

- mgmt.level_reduce and mgmt.adjust_scale ops to support the corresponding operations
- secret-insert-mgmt-bgv uses these ops to handle cross-level ops, where adjust_scale is a placeholder
- --validate-noise will generate parameters aware of these management ops
- --populate-scale (better name wanted) concretely fills in the scale based on the parameters

TODO:
- include-first-mul option
- LWE dialect scale support
- --mlir-to-<scheme> #1420

Cc @AlexanderViand-Intel: a comment on #1295 (comment) is that the two backends we have can safely Add(ct0, ct1) with ciphertexts of different scales, as internally, when they find the scales mismatched, they just adjust the scale themselves. So the mixed-degree option for optimize-relinearization can be on without affecting correctness, though the noise is different. Merging this PR does not fix the scale-mismatch problem possibly induced by optimize-relinearization for our current two backends, but it does pave the way for our own poly backend, which must be scale-aware.
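To illustrate that claim, here is a sketch of the bookkeeping a backend can do to add ciphertexts at different scales: multiply one side by an integer congruent to the scale ratio mod t, which is exactly what adjust_scale makes explicit. This is a hypothetical sketch, not Lattigo or OpenFHE code:

```python
# Hypothetical sketch of scale matching before Add in BGV (bookkeeping
# only; real backends also account for the noise growth this causes).
def delta_to_match(s_from, s_to, t):
    """Integer factor (mod t) that takes a ciphertext at scale s_from
    to scale s_to, so the two operands of Add agree."""
    return s_to * pow(s_from, -1, t) % t

t = 65537
# Matches the delta_scale = 4 in the adjust_scale example earlier:
print(delta_to_match(4, 16, t))  # 4
```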
Example

The input MLIR:
After secret-insert-mgmt-bgv, we get IR where adjust_scale has no concrete scale parameter.
After --validate-noise and --populate-scale, we will get the per-level scale and the value to fill in for each adjust_scale, where the first three lines are purely calculated from the BGV scheme parameters and the rest is the analysis validating whether the scales match.
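As a sketch of the "purely from scheme parameters" part, under the same assumed model as the earlier snippet (each modreduce multiplies the plaintext scale by q_i^{-1} mod t; the residues are illustrative, not real parameters):

```python
# Sketch: per-level scales from the modulus chain (assumed model,
# not the actual --populate-scale implementation).
def per_level_scales(dropped_q_mod_t, t, s_top=1):
    """Scale after each successive modreduce, starting from s_top at
    the highest level; dropped_q_mod_t lists residues in drop order."""
    scales = [s_top]
    for q in dropped_q_mod_t:
        scales.append(scales[-1] * pow(q, -1, t) % t)
    return scales

# With the illustrative residues used earlier (49153, then 32768):
print(per_level_scales([49153, 32768], 65537))  # [1, 4, 65529]
```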
The initial scaling factor is chosen to be 1 for both include-first-mul={true,false}, as for include-first-mul=false, the scaling factor of the last level must be the same, so we have 1 * 1 = 1.