I am starting to develop models in which the spatial isolation of a forest fragment of interest drives response variables such as diversity.

The idea is that we measured some variables in different forest fragments and we want to fit models such as:

bird diversity ~ covariates + isolation

The isolation term is computed following Hanski’s incidence function (see, for instance, equation 10 in this paper):

isolation_i = \sum_{j \neq i} area_j * e^{-\alpha * d_{ij}}

with d_{ij} the distance between fragments i and j, area_j the area of fragment j, and \alpha a parameter to estimate from the data.
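As a concrete numerical check of the formula (toy areas, distances, and an assumed fixed `alpha`, rather than values from my data), the sum can be evaluated directly:

```python
import numpy as np

# Toy example: 3 fragments with assumed areas and pairwise distances.
area = np.array([2.0, 5.0, 1.0])
d = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]])
alpha = 0.5  # decay parameter (estimated from the data in the real model)

# isolation_i = sum over j != i of area_j * exp(-alpha * d_ij)
isolation = np.array([
    sum(area[j] * np.exp(-alpha * d[i, j]) for j in range(3) if j != i)
    for i in range(3)
])
print(isolation)
```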

I gave it a first try and developed the attached model (1.6 KB). The key issue is the derivation of the isolation value via a custom function:

```
functions {
  vector isolation(matrix dmat, vector area, int id_need, real alpha) {
    vector[id_need] isol;
    for (i in 1:id_need) {
      // dot product of the areas with the exponential distance decay;
      // note: area * exp(...) with a row_vector would be an outer product,
      // so the per-patch products must be combined with dot_product.
      // Subtract area[i] to remove the focal patch itself (exp(0) = 1).
      isol[i] = dot_product(area, exp(-alpha * dmat[i, ])) - area[i];
    }
    isol = (isol - mean(isol)) / sd(isol); // standardize
    return isol;
  }
}
```

I have two questions:

- Does such an approach make sense? I am especially wondering about estimating the decay of the isolation function (the \alpha parameter) jointly with the strength of the isolation effect (the slope parameter \gamma) from the collected data; see the attached R script (964 Bytes) for simulating data with a structure similar to mine.
- Right now this is implemented with a for-loop that seems to take quite some time. As I want to expand on this model further, I was wondering whether the function could be optimized in some way.
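On the second question, my understanding is that the loop is a matrix–vector product in disguise: stacking the per-patch sums gives exp(-\alpha D) %*% area, with the focal-patch term removed via the diagonal (d_ii = 0, so exp(0) * area_i = area_i). A numpy sketch with made-up data (random coordinates and areas, assumed `alpha`) checks that the two forms agree; if this is right, the Stan loop could be replaced by a one-liner along the lines of `exp(-alpha * dmat) * area - area`:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
area = rng.uniform(1, 10, n)            # made-up fragment areas
coords = rng.uniform(0, 5, (n, 2))      # made-up fragment coordinates
# Euclidean pairwise distance matrix (symmetric, zero diagonal)
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
alpha = 0.8                             # assumed decay parameter

# Loop version: isolation_i = sum_{j != i} area_j * exp(-alpha * d_ij)
loop = np.array([
    sum(area[j] * np.exp(-alpha * d[i, j]) for j in range(n) if j != i)
    for i in range(n)
])

# Vectorized: the diagonal contributes area_i * exp(0) = area_i, so subtract it.
vec = np.exp(-alpha * d) @ area - area

print(np.allclose(loop, vec))
```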

Thanks in advance for your help!