WAIC estimates the same thing as LOO, but uses a computationally less stable approach, and its self-diagnostic is less reliable. Thus, there is no need to consider WAIC at all. The LOOIC in the loo package is just the LOO log score multiplied by -2, but there is no benefit in multiplying by -2, so you can also just look at the LOO log score, i.e., elpd.
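To illustrate, a minimal sketch (assuming `fit` is a hypothetical model fitted with rstanarm or brms; the model itself is not from this thread):

```r
library(loo)

loo_res <- loo(fit)
print(loo_res)  # reports elpd_loo, p_loo, and looic with their SEs

# looic carries no extra information beyond elpd_loo:
loo_res$estimates["looic", "Estimate"] ==
  -2 * loo_res$estimates["elpd_loo", "Estimate"]
```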
See the CV-FAQ for an overview of all your alternatives.
Only you can answer whether it makes more sense to estimate the predictive performance for a new participant (approximated with leave-one-participant-out) or for a new observation from an existing participant (approximated with leave-one-observation-out). So which one do you want to know? They tell you slightly different things about your hierarchical model (see more in the CV-FAQ).
This indicates that you could also consider leave-one-week-out or leave-one-task-out cross-validation. These different leave-something-out patterns tell you how much information is shared between 1) participants, 2) weeks, 3) tasks, and 4) individual observations, so they target different aspects of your hierarchical model. If one of these patterns is more relevant for your research question, then choose that one. Please tell us more about your research question if you want more advice. (One reason I don't like how information criteria are usually presented is that it hides this connection to the actual research question.)
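As a hedged sketch of how the two targets differ in practice with brms (the formula, data frame `d`, and variable names are hypothetical):

```r
library(brms)

# Hypothetical hierarchical model with a participant-level intercept
fit <- brm(y ~ x + (1 | participant), data = d)

# Predictive performance for a new observation from an existing
# participant: leave-one-observation-out via PSIS-LOO
loo(fit)

# Predictive performance for a new participant: leave-one-participant-out,
# i.e., grouped K-fold where K equals the number of participants and each
# fold leaves out all observations from one participant
kfold(fit,
      K = length(unique(d$participant)),
      group = "participant")
```

The same pattern extends to `group = "week"` or `group = "task"` if those are the relevant prediction tasks.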
Note that if you don’t aggregate at all (you could also aggregate over weeks or tasks), the log_lik matrix can be very big and make things slow; in that case see the loo vignette “Using Leave-one-out cross-validation for large data”.
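For completeness, one way to aggregate is to sum the pointwise log-likelihoods within each group before running PSIS. A hedged sketch, assuming `log_lik` is an S-draws-by-N-observations matrix, `participant` is a length-N grouping vector, and `chain_id` marks the chain of each draw (all three names are hypothetical):

```r
library(loo)

# Sum log-likelihood columns within each participant, giving an
# S-by-(number of participants) matrix for leave-one-participant-out
log_lik_grouped <- sapply(
  split(seq_along(participant), participant),
  function(idx) rowSums(log_lik[, idx, drop = FALSE])
)

# Relative efficiencies are computed from likelihood values, hence exp()
r_eff <- relative_eff(exp(log_lik_grouped), chain_id = chain_id)
loo(log_lik_grouped, r_eff = r_eff)
```

Be aware that PSIS can fail here (high Pareto k warnings) when each participant contributes many observations; in that case fall back to K-fold as above.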