
Repository graph

Git revisions:
  • bugfix/fix_rigidbodymotion_difference
  • decasteljau
  • feature/ARRN-mod
  • feature/HM-numericalBenchmark
  • feature/HarmonicmapsBenchmark
  • feature/SimoFoxWithLocalFEfunctions
  • feature/bendingIsometries
  • feature/bendingIsometries-PBFE-Stiefel
  • feature/harmonicmapsAddons
  • feature/introduceRetractionNotion
  • feature/riemannianTRaddons
  • feature/simofoxBook
  • fix-fd-gradient-scaling
  • fix_localrodassembler_compiler_error
  • issue/vtk-namespace
  • make_rod-eoc_run
  • master (default)
  • releases/2.0-1
  • releases/2.1-1
  • releases/2.10
Commits shown in the graph (newest first):
  • Introduce methods reduceAdd and reduceCopy for the MatrixCommunicator
  • Fix screen output
  • Some cleanup
  • Add a method VectorCommunicator::scatter, and use it
  • Use the new methods VectorCommunicator::reduceAdd and ...::reduceCopy
  • Introduce new methods reduceAdd and reduceCopy
  • Argg, can't use FE basis size in parallel...
  • [bugfix] Use FE basis to determine size of toplevel hasObstacles array
  • Complete the parallelization for the 1st-order case
  • Use transfer operator even for the size of the toplevel hasObstacle field
  • Use the transfer operators to determine the sizes of the hasObstacle arrays
  • Print status messages only if our rank is '0'
  • Distribute energy and model decrease over all processors
  • Security branch to record the state before the riemanniantrsolver receives parallelization patches
  • Use 'fmin' instead of 'min', and 'fmax' instead of 'max'
  • Do not have mpiHelper_ as a member of RiemannianTrustRegionSolver after all
  • Allow different index objects for matrix rows and columns
  • Add various helper functions needed to parallelize the GFE code
  • Implement distributed energy computation
  • Distribute the grid over the processors after refinement
  • Create an MPIHelper object, and hand it over to the RiemannianTrustregion solver
  • Assemble only on Interior_Partition
  • Hand-code one specific matrix-matrix multiplication
  • Call MPIHelper::instance, we are starting to get this code parallelized
  • Fix boundary values for the new smaller domain
  • Add missing include
  • Fix domain dimension in the Taylor/Bertoldi/Steigmann example
  • Set minimum number of iterations to 1
  • Use a better initial iterate for the TargetSpaceTRSolver than coefficients_[0]
  • New interpolation function that interpolates in a Euclidean space and projects onto the manifold
  • Partially revert previous commit: it contained some debugging stuff
  • Compile again now that the assembleHessian method is called assembleGradientAndHessian
  • Rename loop variable to fix ambiguity
  • Disturb the initial iterate directly in the Identity function object
  • Some extra comments
  • Switch to second-order finite elements
  • Simplification: use make_shared
  • Downsample higher-order Lagrangian functions to first-order ones.
  • Use new CosseratVTKWriter with an explicitly provided function space basis
  • Remove unused typedefs