[http://perso.ens-lyon.fr/damien.stehle/english.html Damien Stehle]'s fpLLL code is wrapped at [http://trac.sagemath.org/sage_trac/ticket/723 #723] (the patch itself is available as [http://sage.math.washington.edu/home/malb/fplll.patch fplll.patch]). For some problems, this already gives quite good performance:

{{{#!python
sage: B = MatrixSpace(IntegerRing(), 50, 51)(0)
sage: for i in range(50): B[i,0] = ZZ.random_element(2**10000)
....:
sage: for i in range(50): B[i,i+1] = 1
....:
sage: time C = B.LLL('fpLLL:fast')
CPU times: user 9.54 s, sys: 0.00 s, total: 9.54 s
Wall time: 9.56

sage: C.is_LLL_reduced()
True

sage: BM = B._magma_()
sage: time CM = BM.LLL()
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 15.20

sage: time magma.eval("CM := LLL(%s:Fast:=1)"%BM.name())
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 11.68
}}}

However, the main strength of MAGMA's LLL is that it automatically chooses a reasonably 'good' LLL implementation for the input at hand. This strategy is described in Damien Stehle's [http://perso.ens-lyon.fr/damien.stehle/FPLLL_SURVEY.html paper], and timings can be found in some of his [http://magma.maths.usyd.edu.au/Magma2006/talks/stehle.pdf slides]. For those examples, SAGE currently seems to perform quite poorly.
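
A crude, hand-rolled version of that automatic choice could be sketched on the SAGE side as below. This is only a minimal sketch: it assumes the wrapper also exposes the 'proved' and 'heuristic' fpLLL variants under the names 'fpLLL:proved' and 'fpLLL:heuristic' (only 'fpLLL:fast' is confirmed in the session above), and the helper best_lll is hypothetical.

{{{#!python
# Sketch: try several LLL variants and keep the fastest one that actually
# returns an LLL-reduced basis -- a crude stand-in for what MAGMA does
# automatically.  Only 'fpLLL:fast' is confirmed in the session above; the
# 'fpLLL:heuristic' and 'fpLLL:proved' names are assumptions.
from sage.all import MatrixSpace, IntegerRing, ZZ  # not needed inside the sage shell
import time

def best_lll(B, algorithms=('fpLLL:fast', 'fpLLL:heuristic', 'fpLLL:proved')):
    """Return (name, seconds, reduced matrix) for the fastest variant whose
    output passes is_LLL_reduced(); skip names the wrapper does not know."""
    best = None
    for alg in algorithms:
        t0 = time.time()
        try:
            C = B.LLL(alg)               # same call style as in the session above
        except (ValueError, TypeError):
            continue                      # variant not exposed by the wrapper
        elapsed = time.time() - t0
        if C.is_LLL_reduced() and (best is None or elapsed < best[1]):
            best = (alg, elapsed, C)
    return best

# The same knapsack-style basis as in the session above.
B = MatrixSpace(IntegerRing(), 50, 51)(0)
for i in range(50):
    B[i, 0] = ZZ.random_element(2**10000)
    B[i, i+1] = 1

result = best_lll(B)
if result is not None:
    print("fastest variant: %s (%.2fs)" % (result[0], result[1]))
}}}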