
> That would be nonparametric statistics.

No, it wouldn't. First, "nonparametrics" is in general a somewhat misleading term: the most common instantiations place function ("process") priors on modeling decisions that would otherwise be found through trial and error, and those process priors have parameters of their own. More importantly, LSTMs and neural networks are very much parametric: their success comes from advances in computing and optimization that have made it feasible to estimate these parameters in very complicated model structures.
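To make the "process priors have their own parameters" point concrete, here's a minimal sketch (my own illustration, not from the thread) of a Gaussian-process prior: the model is "nonparametric," yet the RBF kernel it uses still carries a length-scale parameter that governs every function drawn from the prior.

```python
import numpy as np

def rbf_kernel(x, y, length_scale=1.0):
    # Squared-exponential covariance; length_scale is a parameter
    # of the "nonparametric" process prior.
    d2 = (x[:, None] - y[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)

rng = np.random.default_rng(0)
xs = np.linspace(0.0, 1.0, 50)

# Draw function samples from the GP prior; changing length_scale
# changes the character of every sampled function.
K = rbf_kernel(xs, xs, length_scale=0.2) + 1e-8 * np.eye(len(xs))
samples = rng.multivariate_normal(np.zeros(len(xs)), K, size=3)
print(samples.shape)  # (3, 50): 3 function draws at 50 input points
```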



Your CS usage of "parametric" is not quite one-to-one with its usage in statistics.

Also, what you're describing is very similar to Bayesian statistics.

> But more importantly, LSTMs and neural networks are very much parametric - their success come from the advances in computing and optimization that have enabled estimating these parameters in very complicated model structures.

Which, for a statistician, is essentially a black box and nonparametric: you have no idea what the distribution is, and there is no assumption of a distribution. Hence nonparametric statistics, which answers the question you asked.


Why are you talking about priors? Nonparametric vs. parametric is an axis completely orthogonal to Bayesian vs. frequentist.

We weren't talking about "success," though; I was responding to the question "where in the body of stats literature would a neural net model lie?"

I argue that would be nonparametric stats. In parametric stats, the ratio #params/#data goes to 0 in the limit. For models where this is not the case, statisticians and probabilists call them nonparametric (and in certain cases semi-parametric). Neural nets, especially the deep kind (and certainly not the single-layer kind), have the property that #params/#data stays finite and large.
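A back-of-the-envelope check of that ratio (my own example, using an MNIST-scale MLP as an illustrative assumption):

```python
# Parameter count for a small deep MLP: each layer contributes
# (fan_in * fan_out) weights plus fan_out biases.
layer_sizes = [784, 512, 512, 512, 10]  # hypothetical MNIST-style classifier
n_params = sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

n_data = 60_000  # MNIST-sized training set
ratio = n_params / n_data
print(n_params, round(ratio, 2))  # 932362 15.54
```

So even for this modest network, #params/#data is around 15 and does not shrink as you'd expect in the classical parametric regime.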



