Fast quantum algorithms for Least Squares Regression and Statistic Leverage Scores

Yang Liu, Shengyu Zhang
The Chinese University of Hong Kong

 
Part I. Linear regression
Output a "quantum sketch" of the solution.

Part II. Computing leverage scores and matrix coherence
Output the target numbers.
 
Part I: Linear regression
Closed-form solution
Relaxations
*1. K. Clarkson, D. Woodruff. STOC, 2013.
*2. J. Nelson, H. Nguyen. FOCS, 2013.
Quantum sketch

*1. A. Harrow, A. Hassidim, S. Lloyd. PRL, 2009.
Controversy

Useless? Can't read out each solution variable x*_i.
Useful? As intermediate steps, e.g. when some global info of x* is needed.
⟨c, x*⟩ can be obtained from |x*⟩ by a SWAP test.
Classically also poly(log n)? Impossible unless P = BQP.
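The SWAP-test step can be sketched classically: the test accepts with probability (1 + |⟨c|x⟩|²)/2, so repeating it estimates |⟨c, x*⟩|. A minimal pure-Python sketch (function names are illustrative, not from the talk):

```python
import math

def swap_test_accept_prob(c, x):
    """Probability that a SWAP test on unit vectors |c>, |x> accepts:
    Pr[accept] = (1 + |<c|x>|^2) / 2."""
    inner = sum(ci * xi for ci, xi in zip(c, x))
    return (1 + inner ** 2) / 2

def overlap_magnitude(p_accept):
    """Invert the SWAP-test statistic: |<c|x>| = sqrt(2*Pr[accept] - 1)."""
    return math.sqrt(max(0.0, 2 * p_accept - 1))

c = [1.0, 0.0]
x = [0.6, 0.8]                   # a unit vector standing in for |x*>
p = swap_test_accept_prob(c, x)
print(p)                         # 0.68
print(overlap_magnitude(p))      # recovers |<c|x>| = 0.6 (up to sign)
```

Note the SWAP test only reveals the magnitude of the overlap, which is why an extra estimate of ‖x*‖₂² is needed to turn it into ⟨c, x*⟩.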
LSR results

Back to the overdetermined system: x* = A⁺b.
[WBL12]*1: Output |x̃⟩ ∝ Σ_i x*_i |i⟩ in time O(log(n+p) s³ κ⁶).
Ours: Same approximation in time O(log(n+p) s² κ³). Simpler algorithm.
Can also estimate ‖x*‖₂², which is used for, e.g., computing ⟨c, x*⟩.
Extensions: Ridge Regression, Truncated SVD.

*1. N. Wiebe, D. Braun, S. Lloyd. PRL, 2012.
Our algorithm for LSR

Input: Hermitian A ∈ ℝ^{n×n}, b ∈ ℝⁿ. Assume A = Σ_{i=1}^n λ_i |v_i⟩⟨v_i| with 1 ≥ λ_1 ≥ … ≥ λ_r ≥ 1/κ, and the rest λ_i's are 0. Non-Hermitian reduces to Hermitian.
Output: |φ⟩ ∝ |x̃⟩ with |x̃⟩ ≈ |x*⟩, and ℓ ≈ ‖x*‖₂².
Note: Write b as b = Σ_i β_i |v_i⟩; then the desirable output is A⁺b = Σ_{i∈[r]} (β_i/λ_i) |v_i⟩.
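The target A⁺b = Σ_{i∈[r]} (β_i/λ_i)|v_i⟩ can be checked numerically. A pure-Python sketch on assumed toy data (the eigenbasis is the standard basis here; `pinv_apply` is an illustrative name):

```python
def pinv_apply(eigvals, eigvecs, b):
    """Apply A⁺ = Σ_{λ_i>0} λ_i⁻¹ |v_i><v_i| to b, for a Hermitian A
    given by its spectral decomposition (pure Python, no numpy)."""
    n = len(b)
    betas = [sum(v[j] * b[j] for j in range(n)) for v in eigvecs]  # β_i = <v_i|b>
    x = [0.0] * n
    for lam, beta, v in zip(eigvals, betas, eigvecs):
        if lam > 1e-12:                       # keep only the rank-r part
            for j in range(n):
                x[j] += (beta / lam) * v[j]
    return x

# Toy spectrum: eigenbasis = standard basis, λ = (0.5, 0.25, 0), b = (1, 1, 1)
vals = [0.5, 0.25, 0.0]
vecs = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(pinv_apply(vals, vecs, [1.0, 1.0, 1.0]))  # [2.0, 4.0, 0.0]
```

The λ = 0 direction is dropped, exactly as the pseudo-inverse prescribes.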
Algorithm
Tool: Phase Estimation quantum algorithm. Output eigenvalue for a given eigenvector.
 
Extension 1: Ridge regression
 
Extension 2: Truncated SVD
Part II. Statistic leverage scores

*1. M. Mahoney. Randomized Algorithms for Matrices and Data. Foundations & Trends in Machine Learning, 2010.

Computing leverage scores
*1. P. Drineas, M. Magdon-Ismail, M. Mahoney, D. Woodruff. J. MLR, 2012.
 
Algorithm for leverage scores

Input: rank-r Hermitian A ∈ ℝ^{n×n}, b ∈ ℝⁿ, i ∈ [r].
A = Σ_{i=1}^n λ_i |v_i⟩⟨v_i| with 1 ≥ λ_1 ≥ … ≥ λ_r ≥ 1/κ, λ_{r+1} = … = λ_n = 0.
Output: ℓ_i(A).
Key Lemma: If |e_k⟩ = Σ_{i∈[n]} β_i |v_i⟩, then

ℓ_k = ‖U_k‖₂² = ⟨e_k| U Uᵀ |e_k⟩
    = ⟨e_k| A A⁺ |e_k⟩   // A A⁺ = U D Vᵀ · V D⁻¹ Uᵀ = U Uᵀ
    = (Σ_{i=1}^n β_i ⟨v_i|) (Σ_{j=1}^r λ_j |v_j⟩⟨v_j|) (Σ_{k=1}^r λ_k⁻¹ |v_k⟩⟨v_k|) (Σ_{l=1}^n β_l |v_l⟩)
    = Σ_{ijkl} β_i λ_j λ_k⁻¹ β_l δ_{ij} δ_{jk} δ_{kl}
    = Σ_{i=1}^r β_i².
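The Key Lemma chain ℓ_k = ⟨e_k|AA⁺|e_k⟩ = Σ_{i≤r} β_i² can be verified on a toy spectrum. The sketch below (assumed data, pure Python) builds A and A⁺ from two eigenvectors in ℝ³ and compares both sides:

```python
# Toy rank-2 spectrum in R³: orthonormal v_1, v_2; the third eigenvalue is 0.
s = 2 ** -0.5
vecs = [[s, s, 0.0], [0.0, 0.0, 1.0]]
lams = [1.0, 0.5]
n, r = 3, 2

# A = Σ λ_i v_i v_iᵀ  and  A⁺ = Σ λ_i⁻¹ v_i v_iᵀ  (spectral formulas)
A  = [[sum(lams[i] * vecs[i][a] * vecs[i][b] for i in range(r)) for b in range(n)] for a in range(n)]
Ap = [[sum(vecs[i][a] * vecs[i][b] / lams[i] for i in range(r)) for b in range(n)] for a in range(n)]

def quad_form(M, N, k):
    """<e_k| M N |e_k> = (M N)[k][k]."""
    return sum(M[k][j] * N[j][k] for j in range(n))

for k in range(n):
    lemma = sum(v[k] ** 2 for v in vecs)       # Σ_{i≤r} β_i², with β_i = v_i[k]
    assert abs(quad_form(A, Ap, k) - lemma) < 1e-12
print([sum(v[k] ** 2 for v in vecs) for k in range(n)])  # ≈ [0.5, 0.5, 1.0]
```

Note the eigenvalues cancel out of ⟨e_k|AA⁺|e_k⟩, which is why ℓ_k depends only on the eigenbasis.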
 
 
Algorithm

Summary

We give efficient quantum algorithms for two canonical problems on sparse inputs:
Least squares regression
Statistical leverage scores
The problems are linear algebraic, not group/number/polynomial theoretic.
 
Uploaded on Sep 13, 2024.

Presentation Transcript


  1. Fast quantum algorithms for Least Squares Regression and Statistic Leverage Scores. Yang Liu, Shengyu Zhang. The Chinese University of Hong Kong.

  2. Part I. Linear regression: output a "quantum sketch" of the solution. Part II. Computing leverage scores and matrix coherence: output the target numbers.

  3. Part I: Linear regression. Least Squares Regression (LSR): solve the overdetermined linear system Ax = b, where A ∈ ℝ^{n×p}, n ≥ p, b ∈ ℝⁿ. Goal: compute min_x ‖Ax − b‖₂².

  4. Closed-form solution. Closed-form solution known: x* = A⁺b = (AᵀA)⁻¹Aᵀb, where A⁺ is the Moore-Penrose pseudo-inverse of A. If the SVD of A is A = Σ_{i∈[p]} σ_i u_i v_iᵀ = U D Vᵀ with D = diag(σ_i), then A⁺ = V D⁻¹ Uᵀ. Classical complexity: O(n²p + np²). Prohibitively slow for big matrices A.
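The closed form can be checked on a tiny overdetermined system. A hand-rolled sketch of the normal equations for a two-column A (`lsr_2col` is an illustrative helper, not from the talk; no numpy needed at this size):

```python
def lsr_2col(A, b):
    """Least-squares x* = (AᵀA)⁻¹Aᵀb for an n×2 matrix A, via the
    2×2 normal equations solved by Cramer's rule."""
    g00 = sum(r[0] * r[0] for r in A)
    g01 = sum(r[0] * r[1] for r in A)
    g11 = sum(r[1] * r[1] for r in A)
    c0 = sum(r[0] * y for r, y in zip(A, b))
    c1 = sum(r[1] * y for r, y in zip(A, b))
    det = g00 * g11 - g01 * g01      # (AᵀA)⁻¹ exists when the columns are independent
    return [(g11 * c0 - g01 * c1) / det, (g00 * c1 - g01 * c0) / det]

A = [[1, 0], [1, 1], [1, 2]]         # fit y = x0 + x1·t at t = 0, 1, 2
b = [0, 1, 2]
print(lsr_2col(A, b))                # [0.0, 1.0]: the exact line y = t
```

Three equations, two unknowns: the system is overdetermined, but here the data lie exactly on a line, so the residual is zero.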

  5. Relaxations. Relaxation: approximate: output x̃ ≈ x*. Important special case: sparse and low-rank A can be handled in input-sparsity time *1,2, where s = # non-zero entries in each row/column and r = rank(A). Quantum speedup? Even writing down the solution x̃ takes linear time. *1. K. Clarkson, D. Woodruff. STOC, 2013. *2. J. Nelson, H. Nguyen. FOCS, 2013.

  6. Quantum sketch. Similar issue as solving the linear system Ax = b for full-rank A ∈ ℝ^{n×n}. Closed-form solution: x* = A⁻¹b. [HHL09]*1: Output |x*⟩ ∝ Σ_i x*_i |i⟩ in time O(s²κ² log n). Condition number κ = σ_1/σ_n, where σ_1 ≥ … ≥ σ_n > 0 are A's singular values; s: sparsity; ∝: proportional. *1. A. Harrow, A. Hassidim, S. Lloyd. PRL, 2009.

  7. Controversy. Useless? Can't read out each solution variable x*_i. Useful? As intermediate steps, e.g. when some global info of x* is needed: ⟨c, x*⟩ can be obtained from |x*⟩ by a SWAP test. Classically also poly(log n)? Impossible unless P = BQP.

  8. LSR results. Back to the overdetermined system: x* = A⁺b. [WBL12]*1: Output |x̃⟩ ∝ Σ_i x*_i |i⟩ in time O(log(n+p) s³ κ⁶). Ours: same approximation in time O(log(n+p) s² κ³), with a simpler algorithm. Can also estimate ‖x*‖₂², which is used for, e.g., computing ⟨c, x*⟩. Extensions: Ridge Regression, Truncated SVD. *1. N. Wiebe, D. Braun, S. Lloyd. PRL, 2012.

  9. Our algorithm for LSR. Input: Hermitian A ∈ ℝ^{n×n}, b ∈ ℝⁿ. Assume A = Σ_{i=1}^n λ_i |v_i⟩⟨v_i| with 1 ≥ λ_1 ≥ … ≥ λ_r ≥ 1/κ, and the rest λ_i's are 0. Non-Hermitian reduces to Hermitian. Output: |φ⟩ ∝ |x̃⟩ with |x̃⟩ ≈ |x*⟩, and ℓ ≈ ‖x*‖₂². Note: write b as b = Σ_i β_i |v_i⟩; then the desirable output is A⁺b = Σ_{i∈[r]} (β_i/λ_i) |v_i⟩.

  10. Algorithm. Tool: Phase Estimation quantum algorithm. Output eigenvalue for a given eigenvector.
  |b⟩ = Σ_{i∈[n]} β_i |v_i⟩ → Σ_{i∈[n]} β_i |v_i⟩ |λ̃_i⟩, where λ̃_i ≈ λ_i   // Phase Estimation
  → Σ_{i≤r} β_i |v_i⟩ |λ̃_i⟩ ((2κλ̃_i)⁻¹ |1⟩ + √(1 − (2κλ̃_i)⁻²) |0⟩) + Σ_{i>r} β_i |v_i⟩ |λ̃_i⟩ |0⟩   // attach |0⟩, rotate if λ̃_i ≥ 1/2κ
  → Σ_{i≤r} β_i λ̃_i⁻¹ |v_i⟩   // select the |1⟩ component
  ≈ Σ_{i∈[r]} β_i λ_i⁻¹ |v_i⟩, which is just A⁺b.
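The post-selection step can be emulated classically: rotating each branch by (2κλ̃_i)⁻¹ and keeping the |1⟩ component leaves amplitudes proportional to β_i/λ_i. A sketch with assumed toy amplitudes (`postselect_inverse` is an illustrative name):

```python
def postselect_inverse(betas, lams, kappa):
    """Emulate attach-and-rotate followed by post-selection on |1⟩."""
    amps1 = [b / (2 * kappa * l) for b, l in zip(betas, lams)]  # |1⟩ amplitudes
    p1 = sum(a * a for a in amps1)                              # Pr[measuring 1]
    norm = p1 ** 0.5
    state = [a / norm for a in amps1]   # renormalized: ∝ Σ (β_i/λ_i)|v_i⟩ = A⁺b
    return p1, state

p1, state = postselect_inverse([0.6, 0.8], [1.0, 0.5], kappa=2)
print(p1)       # ≈ 0.1825: success probability of the post-selection
print(state)    # direction proportional to (0.6/1.0, 0.8/0.5) = (0.6, 1.6)
```

The common factor (2κ)⁻¹ cancels on renormalization, which is why only the ratios β_i/λ_i survive.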

  11. Extension 1: Ridge regression. For ill-conditioned (i.e. large κ) input? Two classical solutions. Ridge regression: min_x ‖Ax − b‖₂² + λ‖x‖₂². Closed-form solution: x* = (AᵀA + λI)⁻¹Aᵀb. Previous algorithms: O(n²p + p³); O(np + p²r) for sparse and low rank. Ours: O(log(n+p) s² κ_λ³), where κ_λ is the effective condition number determined by λ and the singular values.
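The ridge closed form is easy to sanity-check in the one-column case, where AᵀA + λI is a scalar (illustrative sketch; `ridge_1col` is not from the talk):

```python
def ridge_1col(col, b, lam):
    """Ridge solution x* = (AᵀA + λI)⁻¹Aᵀb for a single-column A."""
    ata = sum(a * a for a in col)              # AᵀA, a scalar here
    atb = sum(a * y for a, y in zip(col, b))   # Aᵀb, a scalar here
    return atb / (ata + lam)

col, b = [1.0, 2.0], [1.0, 2.0]
print(ridge_1col(col, b, 0.0))   # 1.0  (λ = 0 recovers plain least squares)
print(ridge_1col(col, b, 5.0))   # 0.5  (λ shrinks the solution toward 0)
```

Adding λ to every squared singular value is exactly what bounds the effective condition number for ill-conditioned inputs.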

  12. Extension 2: Truncated SVD. Goal: min_x ‖A_σ x − b‖₂², where A_σ = A with singular values < σ truncated. Ours: O(log(n+p) s²/min{σ³, σ² σ_gap}), where σ_gap = σ_k − σ_{k+1} with σ_k > σ ≥ σ_{k+1}.
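Truncation can be illustrated in the diagonal case, where dropping singular values below σ simply zeroes the ill-conditioned coordinates (toy sketch with assumed values):

```python
def truncated_solve(lams, betas, sigma):
    """Invert only the spectral directions with λ_i ≥ σ; drop the rest.
    In the eigenbasis this is a pure coordinate filter."""
    return [(b / l) if l >= sigma else 0.0 for l, b in zip(lams, betas)]

lams, betas = [1.0, 0.0625], [1.0, 1.0]
print(truncated_solve(lams, betas, 0.0001))  # [1.0, 16.0]: tiny λ blows up
print(truncated_solve(lams, betas, 0.1))     # [1.0, 0.0]:  tiny λ truncated
```

The 1/λ amplification of near-zero directions is precisely what the threshold σ removes, trading a little bias for stability.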

  13. Part II. Statistic leverage scores. A has SVD A = Σ_{i∈[r]} σ_i u_i v_iᵀ = U D Vᵀ. The i-th leverage score: ℓ_i = ‖U_i‖₂², where U_i is the i-th row of U. Matrix coherence: max_i ℓ_i(A). Leverage score ℓ_i measures the importance of row i. A well-studied measure, very useful in large scale data analysis, matrix algorithms, outlier detection, low-rank matrix approximation, etc.*1 *1. M. Mahoney. Randomized Algorithms for Matrices and Data. Foundations & Trends in Machine Learning, 2010.
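The definition can be sketched directly: with U holding the top singular vectors as columns, ℓ_i is the squared row norm and coherence is the maximum. A pure-Python toy example (U is assumed data):

```python
def leverage_scores(U):
    """ℓ_i = ||U_i||² for each row U_i of U (U has orthonormal columns)."""
    return [sum(x * x for x in row) for row in U]

# n = 3 rows, r = 2 orthonormal columns: (1,0,0) and (0,0.6,0.8)
U = [[1.0, 0.0], [0.0, 0.6], [0.0, 0.8]]
scores = leverage_scores(U)
print(scores)       # ≈ [1.0, 0.36, 0.64]; leverage scores always sum to r
print(max(scores))  # matrix coherence = 1.0: row 1 is fully "important"
```

A leverage score of 1 means the row is indispensable to the row space, which is why coherence governs how safely rows can be sampled.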

  14. Computing leverage scores. Classical algorithm*1 finding all ℓ_i(A): O(np). No better algorithm known for finding max_i ℓ_i(A). Our quantum algorithms: finding each ℓ_i(A): O(log(n+p)); finding all ℓ_i(A): O(n log(n+p)); finding max_i ℓ_i(A): O(√n log(n+p)). *1. P. Drineas, M. Magdon-Ismail, M. Mahoney, D. Woodruff. J. MLR, 2012.

  15. Algorithm for leverage scores. Input: rank-r Hermitian A ∈ ℝ^{n×n}, b ∈ ℝⁿ, i ∈ [r]. A = Σ_{i=1}^n λ_i |v_i⟩⟨v_i| with 1 ≥ λ_1 ≥ … ≥ λ_r ≥ 1/κ, λ_{r+1} = … = λ_n = 0. Output: ℓ_i(A).
  Key Lemma: If |e_k⟩ = Σ_{i∈[n]} β_i |v_i⟩, then
  ℓ_k = ‖U_k‖₂² = ⟨e_k| U Uᵀ |e_k⟩ = ⟨e_k| A A⁺ |e_k⟩   // A A⁺ = U D Vᵀ · V D⁻¹ Uᵀ = U Uᵀ
  = (Σ_{i=1}^n β_i ⟨v_i|) (Σ_{j=1}^r λ_j |v_j⟩⟨v_j|) (Σ_{k=1}^r λ_k⁻¹ |v_k⟩⟨v_k|) (Σ_{l=1}^n β_l |v_l⟩)
  = Σ_{ijkl} β_i λ_j λ_k⁻¹ β_l δ_{ij} δ_{jk} δ_{kl} = Σ_{i=1}^r β_i².

  16. Algorithm.
  |e_k⟩ = Σ_{i∈[n]} β_i |v_i⟩ → Σ_{i∈[n]} β_i |v_i⟩ |λ̃_i⟩, where λ̃_i ≈ λ_i   // Phase Estimation
  → Σ_{i≤r} β_i |v_i⟩ |λ̃_i⟩ |1⟩ + Σ_{i>r} β_i |v_i⟩ |λ̃_i⟩ |0⟩   // rotate to |1⟩ if λ̃_i ≥ 1/2κ
  Estimate the probability of observing 1 when measuring the last qubit: Pr[1] = Σ_{i∈[r]} β_i² = ℓ_k, the target.
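This estimator can be emulated classically: the |1⟩-probability is exactly Σ_{i≤r} β_i², i.e. ℓ_k by the Key Lemma. A sketch with assumed β and λ (`prob_one` is an illustrative name):

```python
def prob_one(betas, lams, kappa):
    """Pr[last qubit = 1]: mass of e_k on eigen-directions with λ̃_i ≥ 1/2κ,
    i.e. Σ_{i≤r} β_i² when the zero eigenvalues fall below the threshold."""
    return sum(b * b for b, l in zip(betas, lams) if l >= 1 / (2 * kappa))

betas = [0.6, 0.8]        # β_i = <v_i | e_k> for a unit vector e_k
lams  = [1.0, 0.0]        # rank r = 1: only the first direction survives
print(prob_one(betas, lams, kappa=2))   # 0.36 = ℓ_k
```

Unlike the LSR algorithm, no post-selection is needed here: a plain probability estimate of the last qubit already yields the target number.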

  17. Summary. We give efficient quantum algorithms for two canonical problems on sparse inputs: least squares regression and statistical leverage scores. The problems are linear algebraic, not group/number/polynomial theoretic.
