> Should I be worried about overfitting, in the context of Bayesian analysis, if I add too many predictors to the model?
You can get overfitting if your model is badly misspecified, for example by using a thin-tailed observation model when the data distribution has thick tails, or by using a bad prior for the predictor weights. If you use good models and priors, however, there is no such thing as too many predictors (although there are computational limits).
See the following paper (and the references therein) on how to set a prior when you have many more predictors than observations:
Juho Piironen and Aki Vehtari (2017). On the Hyperprior Choice for the Global Shrinkage Parameter in the Horseshoe Prior. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, PMLR 54:905-913. http://proceedings.mlr.press/v54/piironen17a.html
The paper has Stan code, and rstanarm and brms packages have support for easily defining these priors.
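As a minimal sketch of how such priors can be set in rstanarm and brms, assuming simulated data in a data frame `df` with outcome `y`; the variable names, data, and prior parameter values are illustrative, and the choice of global scale from an assumed number of relevant predictors `p0` follows the paper above:

```r
library(rstanarm)

# Assumption: we expect roughly p0 of the p predictors to be relevant,
# with n observations; the paper motivates this choice of global scale.
p0 <- 5; p <- 100; n <- 200
global_scale <- p0 / (p - p0) / sqrt(n)

# Regularized horseshoe prior on the regression coefficients in rstanarm.
fit <- stan_glm(y ~ ., data = df, family = gaussian(),
                prior = hs(df = 1, global_df = 1,
                           global_scale = global_scale))

# The corresponding prior in brms; par_ratio is the assumed ratio of
# relevant to irrelevant predictors.
library(brms)
fit_brms <- brm(y ~ ., data = df, family = gaussian(),
               prior = set_prior(horseshoe(df = 1,
                                           par_ratio = p0 / (p - p0))))
```

See the `hs()` (rstanarm) and `horseshoe()` (brms) documentation for the full set of arguments, including the slab scale and degrees of freedom.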
> In this matter, I am looking for an adequate Bayesian framework to do feature selection before fitting the final model with the most relevant predictors.

See the following paper, which illustrates what happens if you "do feature selection before fitting the final model with the most relevant predictors." The paper also describes the projection predictive approach, which uses decision theory to do the selection correctly while retaining the important part of the information in the full model.
Juho Piironen and Aki Vehtari (2017). Comparison of Bayesian predictive methods for model selection. Statistics and Computing, 27(3):711-735. doi:10.1007/s11222-016-9649-y. http://link.springer.com/article/10.1007/s11222-016-9649-y
The code is available at https://github.com/stan-dev/projpred
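A rough sketch of the projection predictive workflow with projpred, again with made-up data (`df`, `y`); the function names follow the package API, but check the package documentation for current argument names:

```r
library(rstanarm)
library(projpred)

# Fit the full ("reference") model with a sparsifying prior.
fit <- stan_glm(y ~ ., data = df, family = gaussian(), prior = hs())

# Cross-validated variable selection against the reference model.
vs <- cv_varsel(fit)

# Predictive performance as a function of submodel size.
plot(vs, stats = c("elpd", "rmse"))

# Suggested number of predictors, and projection of the full posterior
# onto the submodel of that size.
nsel <- suggest_size(vs)
proj <- project(vs, nterms = nsel)
```

The key point is that the submodel is obtained by projecting the full model's posterior, not by refitting to the data, which avoids the selection-induced overfitting illustrated in the paper.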