That is the old folklore for maximum likelihood. It is much less likely to happen if we can integrate over the posterior accurately. When the integration works, I've seen it happen only in extreme examples, some of them with silly priors. I have also seen it when the integration approximation fails. See also Comparison of Bayesian predictive methods for model selection | Statistics and Computing and Model selection tutorials and talks.
Usually it is the best, and if not, you should think harder about your priors and do prior predictive simulation. See, e.g., Sparsity information and regularization in the horseshoe and other shrinkage priors, [1810.02406] Projective Inference in High-dimensional Problems: Prediction and Feature Selection, and Visualization in Bayesian Workflow | Journal of the Royal Statistical Society Series A: Statistics in Society | Oxford Academic.
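A minimal sketch of a prior predictive check with brms (the formula `y ~ x`, the data frame `mydata`, and the normal(0, 1) prior are placeholders for your own model):

```r
library(brms)

# Sample from the prior only (the likelihood is ignored), so the draws
# show what kind of data the priors imply before seeing the real data.
fit_prior <- brm(
  y ~ x,                               # placeholder formula
  data = mydata,                       # placeholder data frame
  prior = prior(normal(0, 1), class = "b"),
  sample_prior = "only"
)

# Compare prior predictive draws to the observed outcome
pp_check(fit_prior, ndraws = 50)
```

If the prior predictive draws cover clearly impossible data, that is a sign the priors need more thought.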
I recommend using splines or GPs for unknown non-linearities; it's easy with rstanarm and brms (see the sketch below). An exponential form would make sense only if there were a physical model with such a parameter.
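For example, with brms a smooth or GP term is just part of the formula (the outcome `y`, predictor `x`, and data frame `mydata` are hypothetical):

```r
library(brms)

# Spline (smooth term) for an unknown non-linear effect of x
fit_spline <- brm(y ~ s(x), data = mydata)

# Or a Gaussian process over x
fit_gp <- brm(y ~ gp(x), data = mydata)
```

In rstanarm the corresponding spline model would be `stan_gamm4(y ~ s(x), data = mydata)`.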
Did you forget to upload them?
Have you looked at the plots that bayesplot provides?
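For example (assuming `fit` is your fitted rstanarm or brms model; the name is a placeholder):

```r
library(bayesplot)

posterior <- as.array(fit)   # posterior draws as a 3-D array

# Trace plots to check mixing of the chains
mcmc_trace(posterior)

# Posterior intervals for the parameters
mcmc_intervals(posterior)

# Graphical posterior predictive check
pp_check(fit)
```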