Re: data: I would imagine there's some user-friendly way to edit a
dictionary encoded as JSON (or just to upload JSON-formatted data).
Also, we could check the R and Python programs to verify that they pass
some basic tests. For example, we could check that the Stan programs
they contain compile, even if we don't allow any sampling or
optimization to be done.
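Something along these lines would be enough for that kind of check. This is just a minimal sketch assuming PyStan 2's pystan.stanc, which only parses and translates a program to C++ (weaker than a full C++ compile, but it catches Stan syntax errors); the submission/ directory layout is a made-up placeholder.

    # Parse-only check: translate each submitted Stan program to C++ without
    # compiling the generated code or doing any sampling or optimization.
    # Assumes PyStan 2.x, where a parse failure raises a ValueError.
    import glob
    import pystan

    for path in glob.glob("submission/*.stan"):   # placeholder layout
        try:
            pystan.stanc(file=path)               # parse/translate only
            print(path, "OK")
        except ValueError as e:
            print(path, "FAILED:", e)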
Did a var_context ever get written for JSON? If so,
it should be easy to plumb through.
But then what about the interfaces? Do they use native
package JSON readers and the usual from-memory calls,
or do they provide files and use the Stan readers?
I'm pretty sure PyStan just uses Python JSON readers and
then calls a Stan function with the in-memory data as an argument,
so that it's read from Python's memory, not from a JSON file.
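Roughly the pattern I mean, as a sketch assuming PyStan 2's interface; model.stan and data.json are placeholder file names.

    # From-memory path: Python's json module parses the file into a plain
    # dict, and that dict is handed straight to PyStan; Stan itself never
    # reads the JSON file. Assumes PyStan 2.x.
    import json
    import pystan

    with open("data.json") as f:
        data = json.load(f)                       # ordinary dict of named values

    model = pystan.StanModel(file="model.stan")   # compile the model
    fit = model.sampling(data=data)               # data passed from memory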
But it's not, because at least right now one chain takes 2.5 s with 1 core on my old laptop and 25 s in Kaggle. So in practice it's only 2 minutes of three-year-old laptop time. It also often pauses for very long stretches for no apparent reason, and restarting can take minutes. I really can't recommend running Stan or rstanarm in Kaggle kernels.
Hi, just to clarify, below is the submission that I created and sent to Daniel and Thel as an example of the sort of thing I'd like to do.
There are a few relevant issues here:
- Metadata (title, authors, acknowledgments, references, abstract, file descriptions, story, challenges): that would come in the arXiv-like submission form.
- I have 2 Stan programs, not just 1. This is pretty important, actually, as I'd like to be able to submit a mini-project, not just individual Stan programs. Indeed, in many or even most contexts it makes sense to play around and try different models.
- There's R code. It's not necessary to me that the R code be able to run.
- I have not supplied the JSON file or whatever. I understand from Bob that such a file can be created with one line in R. If so, I'm happy to insert that one line of code in my R script, save the JSON file, and submit it along with everything else.
See you
Andrew