The Road to LLL in SAGE

[[http://perso.ens-lyon.fr/damien.stehle/english.html|Damien Stehle]]'s fpLLL code is wrapped at [[http://trac.sagemath.org/sage_trac/ticket/723|#723]] (see also the standalone [[http://sage.math.washington.edu/home/malb/fplll.patch|fplll.patch]]). For some problems this already gives quite good performance:

    sage: B = MatrixSpace(IntegerRing(), 50, 51)(0)
    sage: for i in range(50): B[i,0] = ZZ.random_element(2**10000)
    ....:
    sage: for i in range(50): B[i,i+1] = 1
    ....:
    sage: time C = B.LLL('fpLLL:fast')
    CPU times: user 9.54 s, sys: 0.00 s, total: 9.54 s
    Wall time: 9.56

    sage: C.is_LLL_reduced()
    True

    sage: BM = B._magma_()
    sage: time CM = BM.LLL()
    CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
    Wall time: 15.20

    sage: time magma.eval("CM := LLL(%s:Fast:=1)"%BM.name())
    CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
    Wall time: 11.68

However, the main strength of MAGMA's LLL is that it automatically chooses a reasonably good LLL implementation. This strategy is described in Damien Stehle's [[http://perso.ens-lyon.fr/damien.stehle/FPLLL_SURVEY.html|paper]], and timings can be found in some of his [[http://magma.maths.usyd.edu.au/Magma2006/talks/stehle.pdf|slides]]. On those examples SAGE seems to perform quite poorly.
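As an illustration of what such an automatic choice could look like on the SAGE side, here is a minimal sketch, not MAGMA's strategy and not existing SAGE code: it simply runs a few of the LLL variants SAGE exposes, times them, and keeps the fastest result. The algorithm strings 'fpLLL:proved' and 'NTL:LLL' are assumed to be accepted by the integer-matrix LLL method in the same way that 'fpLLL:fast' is in the session above.

    # Sketch only: brute-force timing of several LLL variants.  A real
    # heuristic would pick the variant from the dimension and bit size
    # instead of running all of them.
    import time

    def fastest_lll(B, algorithms=('fpLLL:fast', 'fpLLL:proved', 'NTL:LLL')):
        best = None
        for alg in algorithms:
            t0 = time.time()
            C = B.LLL(alg)              # same call style as in the session above
            elapsed = time.time() - t0
            if best is None or elapsed < best[0]:
                best = (elapsed, alg, C)
        return best[1], best[2]         # winning algorithm name and reduced basis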

LLL Heuristic

To develop a simple heuristic for choosing an LLL implementation, we considered the following benchmark examples; a sketch of how one such basis could be built in SAGE follows the list. All of these examples are generated with Stehle's generate.c code and follow his slides for dimensions and bitsizes.

  • 1000-dimensional matrices filled with uniformly random integers of 10, 100, or 1000 bits respectively.
  • matrices as they occur for the knapsack problem, with (dimension, bitsize) pairs (10, 100000), (100, 10000), (150, 5000).
  • matrices as they appear when solving simultaneous Diophantine equations, with (dimension, bits) pairs (3, 128), (12, 10000), (76, 5000).
  • Ajtai-type matrices with (dimension, bits) pairs (10, 7), (2, 13), (3, 11).
  • particularly bad matrices with entries of 64, 128, and 500 bits.
  • NTRU-type matrices with (dimension, bits, q) triples (10, 100, 12), (100, 100, 35), (5, 1000, 75).
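As a rough illustration, a knapsack-style basis of the kind listed above has the same shape as the 50 x 51 example earlier on this page: a column of large random integers next to an identity block. The sketch below only mimics that shape and is not a reimplementation of Stehle's generate.c; the function name and parameters are illustrative.

    # Sketch of a knapsack-style lattice basis: one column of large random
    # integers next to an identity block, as in the 50 x 51 session above.
    from sage.all import MatrixSpace, ZZ

    def knapsack_basis(dim, bits):
        B = MatrixSpace(ZZ, dim, dim + 1)(0)
        for i in range(dim):
            B[i, 0] = ZZ.random_element(2**bits)   # random integer in [0, 2**bits)
            B[i, i + 1] = 1                        # identity part
        return B

    # e.g. one of the (dimension, bitsize) pairs from the list:
    # B = knapsack_basis(100, 10000)
    # C = B.LLL('fpLLL:fast')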
