scikit-learn appears to have a model-without-data class and then a separate fit step, but no model-with-data class.
```python
from sklearn import datasets, linear_model
from sklearn.model_selection import cross_val_predict

lr = linear_model.LinearRegression()
boston = datasets.load_boston()  # note: removed in scikit-learn 1.2
y = boston.target
predicted = cross_val_predict(lr, boston.data, y, cv=10)
```
So that’s broken down into constructing the model, then calling one big function to do everything else: cross_val_predict is a mashup of multiple pieces of functionality (splitting, fitting, and predicting).
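To make the "mashup" point concrete, here is a rough sketch of what cross_val_predict bundles together, written out by hand with KFold and clone. This is an illustrative decomposition, not scikit-learn's actual internals, and the toy X and y are made up for the example:

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

# Toy data standing in for boston.data / boston.target
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.1, size=100)

lr = LinearRegression()
predicted = np.empty_like(y)
for train_idx, test_idx in KFold(n_splits=10).split(X):
    model = clone(lr)                      # fresh unfitted copy per fold
    model.fit(X[train_idx], y[train_idx])  # fit on the training fold
    predicted[test_idx] = model.predict(X[test_idx])  # predict the held-out fold
```

So one function call hides a splitter, ten fits, and ten predicts behind a single name.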
Then I looked for a GLM:
```python
import numpy as np
from sklearn import linear_model

alphas = np.logspace(-10, -2, n_alphas)
clf = linear_model.Ridge(fit_intercept=False)
coefs = []
for a in alphas:
    clf.set_params(alpha=a)
    clf.fit(X, y)
    coefs.append(clf.coef_)
```
This also has a model-without-data object (clf). But in this case it creates a god object: it holds not only the model, but is then destructively modified by set_params to pin parameter values to constants, destructively modified again by fit, and finally carries fields containing the fitted values.
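For contrast, the same regularization path can be written without any destructive updates by constructing a fresh estimator per alpha (fit returns the estimator, so the calls chain). This is just a sketch of the mutation-free alternative, with toy X and y invented for the example:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy design matrix and response for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X @ np.array([1.0, 0.5, -0.5, 2.0])

alphas = np.logspace(-10, -2, 20)
# One fresh estimator per alpha: no shared object is ever mutated
coefs = [Ridge(alpha=a, fit_intercept=False).fit(X, y).coef_ for a in alphas]
```

Each iteration gets its own fitted object, so nothing depends on the order of set_params and fit calls against a single shared clf.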
I’d rather not follow the scikit-learn model if this is typical. It is very much like what R would let you do with glm(), except that R kicks off everything in one line and only then lets you start applying the modifiers.