We analyze the conjugate gradient method with a preconditioner that may vary slightly from one iteration to the next. To maintain the optimal convergence properties, we consider a variant that performs an explicit orthogonalization of the search direction vectors. For this method, which we refer to as "flexible" conjugate gradient, we develop a theoretical analysis showing that the convergence rate is essentially independent of the variations in the preconditioner, provided that a proper measure of these variations remains reasonably small (but not necessarily very small). Further, when the condition number is relatively small, heuristic arguments indicate that this conclusion also holds for truncated versions of the algorithm, and even for the standard conjugate gradient method. Typical numerical experiments illustrate these conclusions and show that the flexible variant effectively outperforms the standard conjugate gradient algorithm in several circumstances.
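The idea of explicitly A-orthogonalizing each new search direction against previous ones (all of them, or only the last m in a truncated version) can be sketched as follows. This is a minimal illustration under assumed interfaces (`A` as a NumPy array, `precondition(r, k)` applying the iteration-dependent preconditioner); it is not the paper's implementation.

```python
import numpy as np

def flexible_cg(A, b, precondition, m=None, tol=1e-8, maxit=200):
    """Flexible preconditioned CG sketch: since the preconditioner may
    change at each iteration, the new direction is explicitly
    A-orthogonalized against the stored previous directions
    (all of them if m is None, otherwise only the last m)."""
    x = np.zeros_like(b)
    r = b - A @ x
    dirs = []  # stored triples (p_i, A p_i, p_i^T A p_i)
    for k in range(maxit):
        z = precondition(r, k)          # apply the (possibly varying) M_k^{-1}
        p = z.copy()
        history = dirs if m is None else dirs[-m:]
        for (pi, Api, piApi) in history:
            p -= (z @ Api) / piApi * pi  # subtract A-projection onto p_i
        Ap = A @ p
        pAp = p @ Ap
        alpha = (p @ r) / pAp
        x += alpha * p
        r -= alpha * Ap
        dirs.append((p, Ap, pAp))
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
    return x, k + 1
```

With `m=None` this performs the full orthogonalization analyzed in the paper; small values of `m` correspond to the truncated versions, and `m=0` recovers a method with no reorthogonalization.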