Biased estimate from model with latent variables

Hmm. Your reasoning looks right, but it doesn’t agree with Wikipedia or with simulation, unless I have something backward (easy to do with all this sign-based algebra). The general rule is that if

X \sim \textrm{lognormal}(\mu, \sigma),

then

c \cdot X \sim \textrm{lognormal}(\mu + \log c, \sigma).
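This follows from taking logs: if \log X \sim \textrm{normal}(\mu, \sigma), then

\log(c \cdot X) = \log c + \log X \sim \textrm{normal}(\mu + \log c, \sigma).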

Here your c is 1/d, so \log c = - \log d.

Assuming d > 1, this makes sense, as dividing by d will make the sampled values smaller.

I verified with R in case the Wikipedia page on lognormal was wrong:

> y = rlnorm(1e6, 0, 1)

> y2 = y / 10

> y3 = rlnorm(1e6, -log(10), 1)

> mean(y2)
[1] 0.1646022
> mean(y3)
[1] 0.1648226

> sd(y2)
[1] 0.2157505
> sd(y3)
[1] 0.2159368

So it looks like if y \sim \textrm{lognormal}(0, 1), then y / 10 \sim \textrm{lognormal}(-\log 10, 1).
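
For completeness, here’s an extra check (not part of the simulation above) against the standard closed-form lognormal moments, \exp(\mu + \sigma^2/2) for the mean and \sqrt{\exp(\sigma^2) - 1} \cdot \exp(\mu + \sigma^2/2) for the sd:

mu <- -log(10)
sigma <- 1
exp(mu + sigma^2 / 2)                           # analytic mean, approx 0.1649
sqrt(exp(sigma^2) - 1) * exp(mu + sigma^2 / 2)  # analytic sd, approx 0.2161

Both agree with the simulated values above up to Monte Carlo error.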