The Road to LLL in SAGE

Damien Stehle's fpLLL code is wrapped for SAGE (see #723 and fplll.patch). For some problems, this already gives quite good performance:

   sage: B = MatrixSpace(IntegerRing(), 50, 51)(0)
   sage: for i in range(50): B[i,0] = ZZ.random_element(2**10000)
   ....:
   sage: for i in range(50): B[i,i+1] = 1
   ....:
   sage: time C = B.LLL('fpLLL:fast')
   CPU times: user 9.54 s, sys: 0.00 s, total: 9.54 s
   Wall time: 9.56

   sage: C.is_LLL_reduced()
   True

   sage: BM = B._magma_()
   sage: time CM = BM.LLL()
   CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
   Wall time: 15.20

   sage: time magma.eval("CM := LLL(%s:Fast:=1)"%BM.name())
   CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
   Wall time: 11.68

However, the main strength of MAGMA's LLL is that it automatically chooses a reasonably 'good' LLL implementation. This strategy is described in Damien Stehle's paper, and timings can be found in some of his slides; on those examples, SAGE seems to perform quite poorly.
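
On the SAGE side, such an automatic choice would boil down to a small dispatch function that looks at the dimension and the size of the entries. The following is only a sketch: the helper name choose_lll_algorithm and all thresholds are made up for illustration, and it assumes the 'fpLLL:proved' and 'fpLLL:heuristic' algorithm strings are available alongside the 'fpLLL:fast' used above. A real version would have to be calibrated against actual timings.

   def choose_lll_algorithm(B):
       """Pick an LLL algorithm string for the integer matrix B.
       The thresholds below are illustrative placeholders only."""
       n = max(B.nrows(), B.ncols())
       bits = max([abs(x).nbits() for x in B.list()] + [1])  # largest entry size in bits
       if n <= 40 and bits <= 1000:
           return 'fpLLL:proved'      # small instances: the provable variant is affordable
       elif bits <= 20000:
           return 'fpLLL:fast'        # medium sizes: fast floating-point variant
       else:
           return 'fpLLL:heuristic'   # very large entries: heuristic variant
   # usage, as in the transcript above:  C = B.LLL(choose_lll_algorithm(B))

The point is not the particular thresholds but that the decision only needs the dimension and the bit size of the entries, which is exactly what the benchmarks below are meant to calibrate.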

LLL Heuristic

To develop a simple heuristic for choosing an LLL implementation, we considered using the following benchmarking examples. All of them are generated using Stehle's generate.c code and follow his slides for the choice of dimensions and bit sizes.
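
A benchmark run along these lines could be scripted in SAGE roughly as follows. This is only a sketch: the knapsack-type construction mirrors the transcript above and is merely a guess at what generate.c produces, the helper names knapsack_basis and run_benchmarks are made up, and the dimension/bit-size grids are placeholders rather than the values from the slides.

   import time

   def knapsack_basis(n, bits):
       """Random n x (n+1) knapsack-type basis: a column of 'bits'-bit
       integers followed by an identity block, as in the transcript above.
       (This may well differ from what Stehle's generate.c outputs.)"""
       B = MatrixSpace(IntegerRing(), n, n + 1)(0)
       for i in range(n):
           B[i, 0] = ZZ.random_element(2**bits)
           B[i, i + 1] = 1
       return B

   def run_benchmarks(dims=(25, 50, 100), bitsizes=(1000, 10000),
                      algorithms=('fpLLL:fast',)):
       """Wall-clock each LLL variant on each (dimension, bit size) pair."""
       for n in dims:
           for bits in bitsizes:
               B = knapsack_basis(n, bits)
               for alg in algorithms:
                   t = time.time()
                   B.LLL(alg)
                   print("%4d %6d %-16s %.2fs" % (n, bits, alg, time.time() - t))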
