Commit 3a2c0797 authored by Praetorius, Simon

add PETSc find module and configuration to cmake

changed petsc cmake configuration

added computation of global DOF ids for pre-bases

Global basis ids for dune 2.6 and dune 2.7

corrected the unbind function

wrong unbind function call corrected

Added partition type query to GlobalIdSet

added documentation and extended the test

add general implementation and specialization for TaylorHoodBasis

corrected treepath of velocity component in taylorhoodbasis

add PETSc find module and configuration to cmake

add DofMapping for the association of local and global indices

changed petsc cmake configuration

added DOFMapping and test, to be used with petsc or pmtl backend

extension of istl preconditioner and amg precon

added block jacobi preconditioner with sequential sub precon

update ISTLCommTest for new Communication Creators

removed some smoothers due to bugs in dune-istl

added krylov AMG preconditioner

restructuring of istl solvers and preconditioners

Corrected errors in uniqueBorderPartitionTest and ProblemStat

changed default solver and removed solver category argument from precon creator

restructured solver and precon creators in istl. Added precompiled explicit template instantiations.

cholmod for dune-2.7 only

removed explicit instantiation of AMG precons due to dune-istl bugs

Cleanup of AMG and more documentation, removed some precompiled preconditioners and solvers

redesign of linear algebra backend for inclusion of PETSc

petsc example added for study of the implementation

Matrix pattern communicates non-owning rows

petsc example extended for debugging

output of vectors does not use std::vector directly anymore

dirichletBC redesigned

restructuring of vectors and matrices, added BiLinearForm and LinearForm

cleanup of VectorBackend design. Needs to be applied to all backends!

adapt istl backend to changes in VectorBase and MatrixBase

Update CMakeLists and implemented twist of globalBasisIds

Communication in MatrixNnzStructure and DofMapping corrected.

not HAVE_MPI case implemented

do not use lv.maxSize() for mat and vec sizes

meshcreator with overlap and periodicity for YaspGrid

Twist indexing corrected and new UniqueBorderPartition strategy implemented. In MeshCreator the MacroGridFactory removed, since it does not work in parallel

PETScCommTest added

MatrixNNZStructure extended to include remote dnnz numbers

made all backends conforming to the new structure

MeshCreator now reads AlbertaGrid only on rank 0

make vector copyable

export BACKEND cmake flag to amdis-config file

cmake-pkg config file added

pkg config file updated and PETSc direct solver added

update MTL and PETSc backend after complex test

make petsc constraints work on DirichletNodes that are not on boundary intersections

moved AMDIS_INFO_LEVEL to Output

updated the backup-restore test and corrected some small errors

make friend use struct instead of class for Constraints

change solver parameter in generated initfile of amdisproject and add PETSc to README

update cholmod to recent dune version
parent 56c073fc
@@ -38,13 +38,14 @@ Additionally the following optional libraries can be used:
- [MTL4](https://gitlab.math.tu-dresden.de/spraetor/mtl4) (use this fork to get up-to-date changes)
- [Eigen3](http://eigen.tuxfamily.org) >= 3.3
- [SuiteSparse](http://faculty.cse.tamu.edu/davis/suitesparse.html)
- [PETSc](https://www.mcs.anl.gov/petsc)
- libalberta >= 3.0 (For Alberta-Grids)
And a compiler that supports the C++14 standard, e.g. g++ >= 5.0 and clang >= 3.6, and cmake >= 3.1.
By default, the `dune-istl` linear-algebra backend is used. To choose one of `ISTL`, `MTL`, or `EIGEN`, you can specify the cmake parameter `-DBACKEND=[ISTL,MTL,EIGEN]`.
By default, the `dune-istl` linear-algebra backend is used. To choose one of `ISTL`, `MTL`, `PETSC`, or `EIGEN`, you can specify the cmake parameter `-DBACKEND=[ISTL,MTL,PETSC,EIGEN]`.
If your MTL4 installation is not found by default, you have to specify the path,
If, for example, your MTL4 installation is not found by default, you have to specify the path,
where the file `MTLConfig.cmake` is found, here called `MTL_ROOT`. Then simply use
`dunecontrol` to configure and `cmake` to build:
@@ -78,6 +78,9 @@ if ((PETSC_STATIC_FOUND OR PETSC_FOUND) AND NOT TARGET PETSc::PETSc)
set_property(TARGET PETSc::PETSc PROPERTY
INTERFACE_COMPILE_OPTIONS "${${_prefix}_CFLAGS_OTHER}")
endif ()
# workaround for PETSc macros redefining MPI functions
set_property(TARGET PETSc::PETSc PROPERTY
INTERFACE_COMPILE_DEFINITIONS "PETSC_HAVE_BROKEN_RECURSIVE_MACRO=1")
endif ()
unset(_prefix)
@@ -17,6 +17,14 @@
#include <amdis/linearalgebra/eigen/MatrixBackend.hpp>
#include <amdis/linearalgebra/eigen/VectorBackend.hpp>
#elif HAVE_PETSC
#include <amdis/linearalgebra/petsc/Constraints.hpp>
#include <amdis/linearalgebra/petsc/SolverCreator.hpp>
#include <amdis/linearalgebra/petsc/Traits.hpp>
#include <amdis/linearalgebra/petsc/MatrixBackend.hpp>
#include <amdis/linearalgebra/petsc/VectorBackend.hpp>
#else // ISTL
#include <amdis/linearalgebra/istl/Constraints.hpp>
@@ -20,6 +20,8 @@ if (BACKEND STREQUAL "MTL")
add_subdirectory("mtl")
elseif (BACKEND STREQUAL "EIGEN")
add_subdirectory("eigen")
elseif (BACKEND STREQUAL "PETSC")
add_subdirectory("petsc")
else ()
add_subdirectory("istl")
endif ()
install(FILES
Communication.hpp
Constraints.hpp
Interface.hpp
MatrixBackend.hpp
MatrixNnzStructure.hpp
MatrixNnzStructure.inc.hpp
PetscRunner.hpp
SolverCreator.hpp
Traits.hpp
VectorBackend.hpp
DESTINATION ${CMAKE_INSTALL_INCLUDEDIR}/amdis/linearalgebra/petsc)
#pragma once
#include <petsc.h>
#include <dune/common/parallel/indexset.hh>
#include <dune/common/parallel/localindex.hh>
#include <dune/common/parallel/plocalindex.hh>
#include <dune/common/parallel/remoteindices.hh>
#include <dune/common/std/optional.hh>
#include <amdis/linearalgebra/Communication.hpp>
#include <amdis/linearalgebra/DOFMapping.hpp>
#include <amdis/linearalgebra/ParallelIndexSet.hpp>
namespace AMDiS
{
template <class Basis>
class DistributedCommunication
{
public:
using GlobalIdType = typename GlobalBasisIdSet<Basis>::IdType;
#if HAVE_MPI
using LocalIndexType = Dune::ParallelLocalIndex<DefaultAttributeSet::Type>;
using IndexSet = Dune::ParallelIndexSet<GlobalIdType, LocalIndexType, 512>;
using RemoteIndices = Dune::RemoteIndices<IndexSet>;
#else
struct IndexSet
{
constexpr std::size_t size() const { return size_; }
std::size_t size_ = 0;
};
struct RemoteIndices {};
#endif
using DofMap = DofMapping<IndexSet, PetscInt>;
public:
DistributedCommunication(Basis const& basis)
{
update(basis);
}
/// Return the ParallelIndexSet
IndexSet const& indexSet() const { return *indexSet_; }
IndexSet& indexSet() { return *indexSet_; }
/// Return the RemoteIndices
RemoteIndices const& remoteIndices() const { return remoteIndices_; }
RemoteIndices& remoteIndices() { return remoteIndices_; }
/// Return the DofMapping
DofMap const& dofMap() const { return dofMap_; }
DofMap& dofMap() { return dofMap_; }
/// Update the indexSet, remoteIndices and dofmapping
void update(Basis const& basis)
{
indexSet_.emplace();
#if HAVE_MPI
buildParallelIndexSet(basis, *indexSet_);
remoteIndices_.setIndexSets(*indexSet_, *indexSet_, Environment::comm());
remoteIndices_.template rebuild<true>();
#else
(*indexSet_).size_ = basis.dimension();
#endif
dofMap_.update(*this);
}
protected:
Dune::Std::optional<IndexSet> indexSet_;
RemoteIndices remoteIndices_;
DofMap dofMap_;
};
template <class Basis>
struct CommunicationCreator<DistributedCommunication<Basis>>
{
using Communication = DistributedCommunication<Basis>;
static std::unique_ptr<Communication> create(Basis const& basis, std::string const& prefix = "")
{
return std::make_unique<Communication>(basis);
}
};
} // end namespace AMDiS
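The DistributedCommunication class above bundles the parallel index set, the remote indices and the DOF mapping used by the PETSc backend. A minimal usage sketch follows (hypothetical code, not part of the commit; it assumes a dune-functions global basis `basis` of type `Basis`, an initialized AMDiS Environment, and the usual standard headers):
// Hypothetical sketch: build the communication object for a basis and
// query the resulting local-to-global DOF mapping.
using Comm = AMDiS::DistributedCommunication<Basis>;
std::unique_ptr<Comm> comm = AMDiS::CommunicationCreator<Comm>::create(basis);
auto const& dofMap = comm->dofMap();
std::cout << "owned DOFs: " << dofMap.localSize()
          << " of " << dofMap.globalSize() << " global DOFs\n";
// rebuild index set, remote indices and mapping, e.g. after grid adaptation
comm->update(basis);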
#pragma once
#include <vector>
#include <amdis/linearalgebra/Constraints.hpp>
#include <amdis/linearalgebra/petsc/MatrixBackend.hpp>
#include <amdis/linearalgebra/petsc/VectorBackend.hpp>
namespace AMDiS
{
template <class Traits>
struct Constraints<MatrixBackend<Traits>>
{
using Matrix = MatrixBackend<Traits>;
using Vector = VectorBackend<Traits>;
/// Dirichlet boundary conditions.
/**
* Clear the rows (columns) of the given boundary nodes that are owned by the processor.
* The columns are only cleared if the matrix is symmetric. When clearing the columns, the scaled
* solution values are subtracted from the rhs.
*
* The rhs vector is adapted to the solution values at the Dirichlet nodes.
**/
template <class BitVector>
static void dirichletBC(Matrix& mat, Vector& x, Vector& b, BitVector const& nodes, bool setDiagonal = true)
{
thread_local std::vector<PetscInt> rows;
rows.clear();
auto const& dofMap = mat.dofMap();
for (std::size_t i = 0; i < nodes.size(); ++i)
if (nodes[i])
rows.push_back(dofMap.global(i));
// test for symmetry of the matrix
PetscBool set, flg;
MatIsSymmetricKnown(mat.matrix(), &set, &flg);
if (set == PETSC_TRUE && flg == PETSC_TRUE)
MatZeroRowsColumns(mat.matrix(), rows.size(), rows.data(), setDiagonal ? 1.0 : 0.0, x.vector(), b.vector());
else
MatZeroRows(mat.matrix(), rows.size(), rows.data(), setDiagonal ? 1.0 : 0.0, x.vector(), b.vector());
}
/// Periodic boundary conditions
/**
* The periodic constraints are not implemented in a symmetric fashion
**/
template <class BitVector, class Associations>
static void periodicBC(Matrix& mat, Vector& x, Vector& b, BitVector const& left, Associations const& left2right, bool setDiagonal = true)
{
thread_local std::vector<PetscInt> rows, associated_rows;
rows.clear();
associated_rows.clear();
auto const& dofMap = mat.dofMap();
for (PetscInt i = 0; i < PetscInt(left.size()); ++i) {
if (left[i]) {
// get global row index
PetscInt row_local[2] = {i, at(left2right,i)};
PetscInt row[2] = {dofMap.global(row_local[0]), dofMap.global(row_local[1])};
rows.push_back(row[0]);
associated_rows.push_back(row[1]);
// copy row to associated row
PetscInt ncols;
PetscInt const* cols;
PetscScalar const* vals;
MatGetRow(mat.matrix(), row[0], &ncols, &cols, &vals);
MatSetValues(mat.matrix(), 1, &row[1], ncols, cols, vals, ADD_VALUES);
MatRestoreRow(mat.matrix(), row[0], &ncols, &cols, &vals);
}
}
MatAssemblyBegin(mat.matrix(), MAT_FLUSH_ASSEMBLY); // flush pending insertions before calling MatZeroRows
MatAssemblyEnd(mat.matrix(), MAT_FLUSH_ASSEMBLY);
// clear the periodic rows and add a 1 on the diagonal
MatSetOption(mat.matrix(), MAT_NO_OFF_PROC_ZERO_ROWS, PETSC_TRUE);
MatZeroRows(mat.matrix(), rows.size(), rows.data(), setDiagonal ? 1 : 0, PETSC_NULL, PETSC_NULL);
// Insert the -1 value at the off-diagonal associated columns
// NOTE: the following is probably very slow
for (std::size_t i = 0; i < rows.size(); ++i)
MatSetValue(mat.matrix(), rows[i], associated_rows[i], -1, ADD_VALUES);
MatAssemblyBegin(mat.matrix(), MAT_FINAL_ASSEMBLY); // finalize after inserting the off-diagonal -1 entries
MatAssemblyEnd(mat.matrix(), MAT_FINAL_ASSEMBLY);
std::vector<PetscScalar> data(rows.size());
// copy periodic rhs values to associated rows
VecGetValues(b.vector(), rows.size(), rows.data(), data.data());
VecSetValues(b.vector(), rows.size(), associated_rows.data(), data.data(), ADD_VALUES);
VecAssemblyBegin(b.vector());
VecAssemblyEnd(b.vector());
// set the periodic rhs components to 0
data.assign(rows.size(), 0);
VecSetValues(b.vector(), rows.size(), rows.data(), data.data(), INSERT_VALUES);
VecAssemblyBegin(b.vector());
VecAssemblyEnd(b.vector());
// symmetrize the solution vector
VecGetValues(x.vector(), rows.size(), rows.data(), data.data());
VecSetValues(x.vector(), rows.size(), associated_rows.data(), data.data(), INSERT_VALUES);
VecAssemblyBegin(x.vector());
VecAssemblyEnd(x.vector());
}
private:
template <class Associations>
static PetscInt at(Associations const& m, std::size_t idx)
{
auto it = m.find(idx);
assert(it != m.end());
return it->second;
}
};
} // end namespace AMDiS
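The two static functions above form the PETSc side of the generic Constraints interface. A hedged sketch of how they could be called after assembly (hypothetical variable names; in AMDiS the calls are normally issued by the boundary-condition classes rather than by hand):
// Hypothetical sketch: apply constraints to an assembled PETSc system.
// mat, x, b are MatrixBackend/VectorBackend objects of this backend;
// dirichletNodes is a bit vector marking the constrained local DOFs.
using M = AMDiS::MatrixBackend<Traits>;
AMDiS::Constraints<M>::dirichletBC(mat, x, b, dirichletNodes, /*setDiagonal=*/true);
// leftNodes marks the periodic "left" DOFs; left2right maps each left local
// index to its associated right local index, e.g. a std::map<std::size_t, std::size_t>
AMDiS::Constraints<M>::periodicBC(mat, x, b, leftNodes, left2right);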
#pragma once
#include <petscksp.h>
#include <petsc/private/kspimpl.h>
#include <amdis/Output.hpp>
namespace AMDiS
{
namespace PETSc
{
// KSP routines
/// Print the residual norm at every 10th iteration.
inline PetscErrorCode KSPMonitorCyclic(KSP ksp, PetscInt n, PetscReal rnorm, void* ctx)
{
if (n % 10 == 0)
msg("iteration {:>4}: resid {:<.6e}", n, rnorm);
return 0;
}
/// Prints the residual norm, the true residual norm, and the relative residual norm at each iteration.
inline PetscErrorCode KSPMonitorNoisy(KSP ksp, PetscInt n, PetscReal rnorm, void* ctx)
{
Vec resid;
KSPBuildResidual(ksp, NULL, NULL, &resid);
PetscReal truenorm = 0.0;
VecNorm(resid, NORM_2, &truenorm);
VecDestroy(&resid);
PetscReal bnorm = 0.0;
VecNorm(ksp->vec_rhs, NORM_2, &bnorm);
msg("iteration {:>4}: resid {:<.6e}, true resid {:<.2e}, rel. resid {:<.2e}", n, rnorm, truenorm, truenorm/bnorm);
return 0;
}
/// Prints the true residual norm as well as the preconditioned residual norm at each iteration of an iterative solver.
inline PetscErrorCode KSPMonitorTrueResidualNorm(KSP ksp, PetscInt n, PetscReal rnorm, void* ctx)
{
assert(ctx != nullptr);
#if (PETSC_VERSION_MINOR >= 7)
return ::KSPMonitorTrueResidualNorm(ksp, n, rnorm, (PetscViewerAndFormat*)(ctx));
#else
return ::KSPMonitorTrueResidualNorm(ksp, n, rnorm, ctx);
#endif
}
/// Print the residual norm at each iteration of an iterative solver.
inline PetscErrorCode KSPMonitorDefault(KSP ksp, PetscInt n, PetscReal rnorm, void* ctx)
{
assert(ctx != nullptr);
#if (PETSC_VERSION_MINOR >= 7)
return ::KSPMonitorDefault(ksp, n, rnorm, (PetscViewerAndFormat*)(ctx));
#else
return ::KSPMonitorDefault(ksp, n, rnorm, ctx);
#endif
}
/// Sets an additional function to be called at every iteration to monitor the residual/error etc.
template <class Monitor>
inline PetscErrorCode KSPMonitorSet(KSP ksp, Monitor monitor)
{
#if (PETSC_VERSION_MINOR >= 7)
PetscViewerAndFormat *vf;
PetscErrorCode ierr;
ierr = ::PetscViewerAndFormatCreate(PETSC_VIEWER_STDOUT_WORLD,PETSC_VIEWER_DEFAULT,&vf);CHKERRQ(ierr);
ierr = ::KSPMonitorSet(ksp,(PetscErrorCode (*)(KSP,PetscInt,PetscReal,void*))monitor,vf,(PetscErrorCode (*)(void**))PetscViewerAndFormatDestroy);CHKERRQ(ierr);
return ierr;
#else
return ::KSPMonitorSet(ksp, monitor, PETSC_NULL, PETSC_NULL);
#endif
}
/// Gets the matrix associated with the linear system and a (possibly) different one associated with the preconditioner.
inline PetscErrorCode KSPGetOperators(KSP ksp, Mat *Amat, Mat *Pmat)
{
#if (PETSC_VERSION_MINOR >= 5)
return ::KSPGetOperators(ksp, Amat, Pmat);
#else
return ::KSPGetOperators(ksp, Amat, Pmat, PETSC_NULL);
#endif
}
/// Sets the matrix associated with the linear system and a (possibly) different one associated with the preconditioner.
inline PetscErrorCode KSPSetOperators(KSP ksp, Mat Amat, Mat Pmat)
{
#if (PETSC_VERSION_MINOR >= 5)
return ::KSPSetOperators(ksp, Amat, Pmat);
#else
return ::KSPSetOperators(ksp, Amat, Pmat, SAME_NONZERO_PATTERN);
#endif
}
// Mat routines
/// Get vector(s) compatible with the matrix, i.e. with the same parallel layout
inline PetscErrorCode MatCreateVecs(Mat mat, Vec *right, Vec *left)
{
#if (PETSC_VERSION_MINOR >= 6)
return ::MatCreateVecs(mat, right, left);
#else
return ::MatGetVecs(mat, right, left);
#endif
}
/// Gets a single submatrix on the same number of processors as the original matrix.
inline PetscErrorCode MatCreateSubMatrix(Mat mat, IS isrow, IS iscol, MatReuse cll, Mat *newmat)
{
#if (PETSC_VERSION_MINOR >= 8)
return ::MatCreateSubMatrix(mat, isrow, iscol, cll, newmat);
#else
return ::MatGetSubMatrix(mat, isrow, iscol, cll, newmat);
#endif
}
/// Removes all the components of a null space from a vector.
inline PetscErrorCode MatNullSpaceRemove(MatNullSpace sp, Vec vec)
{
#if (PETSC_VERSION_MINOR >= 5)
return ::MatNullSpaceRemove(sp, vec);
#else
return ::MatNullSpaceRemove(sp, vec, PETSC_NULL);
#endif
}
// PC routines
#if (PETSC_VERSION_MINOR >= 9)
/// Sets the software that is used to perform the factorization
inline PetscErrorCode PCFactorSetMatSolverType(PC pc, MatSolverType stype)
{
return ::PCFactorSetMatSolverType(pc, stype);
}
#else
/// Sets the software that is used to perform the factorization
template <class MatSolverType>
inline PetscErrorCode PCFactorSetMatSolverType(PC pc, MatSolverType stype)
{
return ::PCFactorSetMatSolverPackage(pc, stype);
}
#endif
/// Prints the options that have been loaded. This is useful for debugging purposes.
inline PetscErrorCode PetscOptionsView(PetscViewer viewer)
{
#if (PETSC_VERSION_MINOR >= 7)
return ::PetscOptionsView(PETSC_NULL, viewer);
#else
return ::PetscOptionsView(viewer);
#endif
}
// PETSc routine
/// Inserts options into the database from a string
inline PetscErrorCode PetscOptionsInsertString(const char in_str[])
{
#if (PETSC_VERSION_MINOR >= 7)
return ::PetscOptionsInsertString(PETSC_NULL, in_str);
#else
return ::PetscOptionsInsertString(in_str);
#endif
}
} // end namespace PETSc
} // end namespace AMDiS
\ No newline at end of file
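The wrappers in namespace AMDiS::PETSc above fix a single signature per routine across the supported PETSc versions. A hypothetical usage sketch (assuming an existing KSP object `ksp` and an assembled matrix `A`; error codes are ignored for brevity):
// Hypothetical sketch: drive a solver through the version-independent wrappers.
AMDiS::PETSc::KSPSetOperators(ksp, A, A);               // same matrix for system and preconditioner
AMDiS::PETSc::KSPMonitorSet(ksp, AMDiS::PETSc::KSPMonitorCyclic); // print every 10th residual norm
Vec x, b;
AMDiS::PETSc::MatCreateVecs(A, &x, &b);                 // vectors with the parallel layout of A
KSPSolve(ksp, b, x);                                    // plain PETSc call, identical in all versions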
#pragma once
#include <algorithm>
#include <memory>
#include <vector>
#include <petscmat.h>
#include <amdis/Output.hpp>
#include <amdis/linearalgebra/SymmetryStructure.hpp>
#include <amdis/linearalgebra/petsc/MatrixNnzStructure.hpp>
namespace AMDiS
{
/// \brief The basic container that stores a base matrix
template <class T>
class MatrixBackend
{
friend struct Constraints<MatrixBackend<T>>;
using Communication = typename T::Comm;
public:
using Traits = T;
/// The matrix type of the underlying base matrix
using BaseMatrix = ::Mat;
/// The type of the elements of the DOFMatrix
using value_type = PetscScalar;
/// The index/size - type
using size_type = PetscInt;
public:
/// Constructor. Constructs new BaseMatrix.
MatrixBackend(std::shared_ptr<Communication> comm)
: comm_(std::move(comm))
{}
// disable copy and move operations
MatrixBackend(MatrixBackend const&) = delete;
MatrixBackend(MatrixBackend&&) = delete;
MatrixBackend& operator=(MatrixBackend const&) = delete;
MatrixBackend& operator=(MatrixBackend&&) = delete;
~MatrixBackend()
{
destroy();
}
/// Return a reference to the data-matrix \ref matrix
BaseMatrix& matrix()
{
return matrix_;
}
/// Return a reference to the data-matrix \ref matrix
BaseMatrix const& matrix() const
{
return matrix_;
}
/// Insert a single value into the matrix
template <class RowIndex, class ColIndex>
void insert(RowIndex const& r, ColIndex const& c, PetscScalar value)
{
PetscInt row = dofMap().global(r);
PetscInt col = dofMap().global(c);
MatSetValue(matrix_, row, col, value, ADD_VALUES);
}
/// Insert an element-matrix with row-indices == col-indices
template <class LocalInd, class LocalMatrix>
void scatter(LocalInd const& localInd, LocalMatrix const& localMat)
{
thread_local std::vector<PetscInt> idx;
idx.resize(localInd.size());
// create a vector of global indices from the local indices using the local-to-global map
std::transform(localInd.begin(), localInd.end(), idx.begin(),
[&](auto const& mi) { return this->dofMap().global(mi); });
MatSetValues(matrix_, idx.size(), idx.data(), idx.size(), idx.data(), localMat.data(), ADD_VALUES);
}
/// Insert an element-matrix
template <class RowLocalInd, class ColLocalInd, class LocalMatrix>
void scatter(RowLocalInd const& rowLocalInd, ColLocalInd const& colLocalInd, LocalMatrix const& localMat)
{
thread_local std::vector<PetscInt> ri;
thread_local std::vector<PetscInt> ci;
ri.resize(rowLocalInd.size());
ci.resize(colLocalInd.size());
// create vectors of global indices from the local indices using the local-to-global map
std::transform(rowLocalInd.begin(), rowLocalInd.end(), ri.begin(),
[&](auto const& mi) { return this->dofMap().global(mi); });
std::transform(colLocalInd.begin(), colLocalInd.end(), ci.begin(),
[&](auto const& mi) { return this->dofMap().global(mi); });
MatSetValues(matrix_, ri.size(), ri.data(), ci.size(), ci.data(), localMat.data(), ADD_VALUES);
}
/// Create and initialize the matrix
template <class RowBasis, class ColBasis>
void init(RowBasis const& rowBasis, ColBasis const& colBasis, SymmetryStructure symmetry)
{
Dune::Timer t;
// update the non-zero structure of the matrix.
pattern_.update(rowBasis, *comm_);
info(3, " update pattern needed {} seconds", t.elapsed());
t.reset();
// destroy an old matrix if created before
destroy();
info(3, " destroy old matrix needed {} seconds", t.elapsed());
t.reset();
// create a MATMPIAIJ or MATSEQAIJ matrix with the provided sparsity pattern
MatCreateAIJ(comm(),
dofMap().localSize(), dofMap().localSize(),
dofMap().globalSize(), dofMap().globalSize(),
0, pattern_.d_nnz().data(), 0, pattern_.o_nnz().data(), &matrix_);
// keep sparsity pattern even if we delete a row / column with e.g. MatZeroRows
MatSetOption(matrix_, MAT_KEEP_NONZERO_PATTERN, PETSC_TRUE);
// set symmetry properties of the matrix
switch (symmetry) {
case SymmetryStructure::spd:
MatSetOption(matrix_, MAT_SPD, PETSC_TRUE);
break;
case SymmetryStructure::symmetric:
MatSetOption(matrix_, MAT_SYMMETRIC, PETSC_TRUE);
break;
case SymmetryStructure::hermitian:
MatSetOption(matrix_, MAT_HERMITIAN, PETSC_TRUE);
break;
case SymmetryStructure::structurally_symmetric:
MatSetOption(matrix_, MAT_STRUCTURALLY_SYMMETRIC, PETSC_TRUE);
break;
default:
/* do nothing */
break;
}
info(3, " create new matrix needed {} seconds", t.elapsed());