Model taking too long

I’m running a model in which two chains finish quickly, but there are always one or two that take far longer; in the latest run, one chain has been going for 7 days. The model is a hierarchical model with random effects, 12 covariates under a LASSO-type prior, and a sample of 5,000 units. Is it possible to save the chains that have already finished?

```
Chain 3: Iteration:    1 / 2200 [  0%]  (Warmup)
Chain 4: Iteration:    1 / 2200 [  0%]  (Warmup)
Chain 3: Iteration:  220 / 2200 [ 10%]  (Warmup)
Chain 4: Iteration:  220 / 2200 [ 10%]  (Warmup)
Chain 4: Iteration:  440 / 2200 [ 20%]  (Warmup)
Chain 3: Iteration:  440 / 2200 [ 20%]  (Warmup)
Chain 4: Iteration:  660 / 2200 [ 30%]  (Warmup)
Chain 4: Iteration:  701 / 2200 [ 31%]  (Sampling)
Chain 3: Iteration:  660 / 2200 [ 30%]  (Warmup)
Chain 3: Iteration:  701 / 2200 [ 31%]  (Sampling)
Chain 4: Iteration:  920 / 2200 [ 41%]  (Sampling)
Chain 4: Iteration: 1140 / 2200 [ 51%]  (Sampling)
Chain 3: Iteration:  920 / 2200 [ 41%]  (Sampling)
Chain 3: Iteration: 1140 / 2200 [ 51%]  (Sampling)
Chain 3: Iteration: 1360 / 2200 [ 61%]  (Sampling)
Chain 4: Iteration: 1360 / 2200 [ 61%]  (Sampling)
Chain 3: Iteration: 1580 / 2200 [ 71%]  (Sampling)
Chain 4: Iteration: 1580 / 2200 [ 71%]  (Sampling)
Chain 3: Iteration: 1800 / 2200 [ 81%]  (Sampling)
Chain 4: Iteration: 1800 / 2200 [ 81%]  (Sampling)
Chain 4: Iteration: 2020 / 2200 [ 91%]  (Sampling)
Chain 4: Iteration: 2200 / 2200 [100%]  (Sampling)
Chain 4: 
Chain 4:  Elapsed Time: 207.983 seconds (Warm-up)
Chain 4:                787.077 seconds (Sampling)
Chain 4:                995.06 seconds (Total)
Chain 4: 
Chain 3: Iteration: 2020 / 2200 [ 91%]  (Sampling)
Chain 3: Iteration: 2200 / 2200 [100%]  (Sampling)
Chain 3: 
Chain 3:  Elapsed Time: 281.772 seconds (Warm-up)
Chain 3:                1422.75 seconds (Sampling)
Chain 3:                1704.53 seconds (Total)
Chain 3: 
Chain 2: Iteration:  220 / 2200 [ 10%]  (Warmup)
```


Or, alternatively, could the model be rewritten so that it runs faster?

```stan
data {
  // household model
  int<lower=0> n_eas;                               // number of EAs
  int<lower=0> households[n_eas];                   // number of households per EA
  vector<lower=0>[n_eas] settled_area;
  int<lower=1> n_regions;                           // number of regions
  int<lower=1> n_situations;                        // number of region x (urban/rural) combinations
  int<lower=1, upper=n_situations> ea_to_situations[n_eas];          // situation of each EA
  int<lower=1, upper=n_regions> situations_to_regions[n_situations]; // mapping situations into regions
  int<lower=0> n_cov;                               // number of covariates
  matrix[n_eas, n_cov] cov;                         // matrix of covariates
  int<lower=1> n_beta;
  int<lower=1, upper=5> Ind_sit[n_eas];
}

parameters {
  real baseline_hps; // real<lower=0> baseline_hps;

  vector[n_regions] delta_hps_region;
  real<lower=0> u_delta_region;
  vector[n_situations] delta_hps_situation;
  real<lower=0> u_delta_situation;
  vector[n_eas] delta_hps;
  real<lower=0> sigma_hps;

  matrix[n_cov, n_beta] beta;
  real<lower=0> tau;
}

transformed parameters {
  vector[n_eas] household_per_settled;
  vector<lower=0>[n_eas] lambda_pois;

  for (ea in 1:n_eas) {
    household_per_settled[ea] = baseline_hps
      + delta_hps_region[situations_to_regions[ea_to_situations[ea]]]
      + delta_hps_situation[ea_to_situations[ea]]
      + delta_hps[ea]
      + cov[ea, ] * beta[, Ind_sit[ea]];
  }

  for (ea in 1:n_eas) {
    lambda_pois[ea] = exp(household_per_settled[ea] + log(settled_area[ea]));
  }
}

model {
  // Observational model
  households ~ poisson(lambda_pois);

  // Prior model
  baseline_hps ~ normal(0, 10);

  delta_hps_region ~ normal(0, u_delta_region);
  u_delta_region ~ normal(0.5, 0.5);

  delta_hps_situation ~ normal(0, u_delta_situation);
  u_delta_situation ~ normal(0.5, 0.5);

  delta_hps ~ normal(0, sigma_hps);
  // delta_hps_new ~ normal(0, sigma_hps);
  sigma_hps ~ normal(0.5, 0.5);

  for (i in 1:n_beta) {
    beta[, i] ~ double_exponential(0, tau); // normal(0, tau);
  }

  tau ~ exponential(0.5);
}
```
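
One common cause of a single chain crawling in hierarchical models is the centered parameterization of the random effects (`delta_hps ~ normal(0, sigma_hps)` with `sigma_hps` also being estimated), which can create a funnel-shaped posterior that the sampler crosses very slowly. A sketch of a non-centered rewrite is below, showing only the blocks that change (names match the model above; this is untested against your data, not a drop-in replacement). The same trick applies to `delta_hps_region` and `delta_hps_situation`, and working on the log scale with `poisson_log` avoids the explicit `exp()`:

```stan
parameters {
  // ... other parameters unchanged ...
  vector[n_eas] delta_hps_raw;  // standard-normal "raw" EA effects
  real<lower=0> sigma_hps;
}
transformed parameters {
  // non-centered: scaling the raw effects implies delta_hps ~ normal(0, sigma_hps)
  vector[n_eas] delta_hps = sigma_hps * delta_hps_raw;
  vector[n_eas] log_lambda;
  for (ea in 1:n_eas) {
    log_lambda[ea] = baseline_hps
      + delta_hps_region[situations_to_regions[ea_to_situations[ea]]]
      + delta_hps_situation[ea_to_situations[ea]]
      + delta_hps[ea]
      + cov[ea, ] * beta[, Ind_sit[ea]]
      + log(settled_area[ea]);
  }
}
model {
  delta_hps_raw ~ std_normal();         // replaces delta_hps ~ normal(0, sigma_hps)
  households ~ poisson_log(log_lambda); // Poisson on the log scale, no exp() needed
  // ... remaining priors unchanged ...
}
```

Whether this helps depends on how strongly the data identify each `delta_hps[ea]`, but it is usually the first thing to try when some chains are dramatically slower than others.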

I think it is always possible to run your chains one per process (with `chains = 1` and a different seed/chain ID for each) and later combine the samples with e.g. the posterior package?
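
If chains are run and saved separately like that, it is worth checking convergence on the combined draws rather than per chain. Tools such as the posterior package compute this for you; purely to illustrate what that combination step feeds into, here is a minimal, self-contained sketch of the split-R-hat diagnostic in Python/NumPy (the function name and interface are just for this example):

```python
import numpy as np

def split_rhat(chains):
    """Split-R-hat for draws with shape (n_chains, n_draws).

    Each chain is split in half, then the classic R-hat formula
    compares between-sequence and within-sequence variance.
    """
    chains = np.asarray(chains, dtype=float)
    n_chains, n_draws = chains.shape
    half = n_draws // 2
    # split each chain into two halves -> 2 * n_chains sequences of length `half`
    splits = np.concatenate([chains[:, :half], chains[:, half:2 * half]], axis=0)
    n = splits.shape[1]
    chain_means = splits.mean(axis=1)
    B = n * chain_means.var(ddof=1)           # between-sequence variance
    W = splits.var(axis=1, ddof=1).mean()     # within-sequence variance
    var_plus = (n - 1) / n * W + B / n        # pooled posterior variance estimate
    return float(np.sqrt(var_plus / W))
```

Values near 1 indicate the separately run chains agree; values well above 1 (e.g. > 1.01 by the usual modern threshold) mean they are exploring different regions and should not simply be pooled.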