
The parent here makes a good point, and your comment reflects a common misconception about neurotransmitters. Both dopamine and serotonin can do different things depending on the system and the receptors present. Serotonin, for example, is mostly in your gut, not your cortex! From Wikipedia:

"Approximately 90% of the human body's total serotonin is located in the enterochromaffin cells in the GI tract, where it is used to regulate intestinal movements."

Dopamine is more specific, playing an important role in the limbic system, but is not limited to it. Its functions are not completely understood, and again depend on the receptor.

Better to think of the function of a neurotransmitter as dependent on both system AND receptor. Some neurotransmitters, like dopamine, can be usefully simplified; others, like glutamate, are extremely complicated.


How did you pick the window size used in the analysis? Did you try different window sizes?

I have seen many time-series analyses posted here lately, and all seem to use arbitrary window sizes.

Edit: PS, thanks for posting this and being around for questions.


Service owners wanted the system to identify these problems as quickly as possible, so the initial goal was to go as small as possible. However, the initial data source rolled data up into 1-minute windows, so we figured 5 minutes was the shortest we could get away with. In practice this seems to work well for service owners, as it avoids calling out small spikes as outliers.

We've got an experimental streaming version going that hasn't been set loose on any services yet, but it can get much finer-grained metrics, down to ~10s windows (finer if we cared to).

edit: Forgot to add that we did try other window sizes, as long as 30 minutes, but we found that longer windows let the past influence the current decision too much. If the metric had spiked in the past, the 30-minute windows made us too aggressive about calling the current point an outlier. Furthermore, if it had been an inlier and only just become an outlier, the long window hurt our time to detect, which is an important metric for us.
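A minimal sketch of how window size changes what gets flagged (a plain trailing-window z-score, not the system described above; depending on the detector, a past spike can mask new outliers by inflating the estimated spread, or cause extra flags by shifting the baseline -- this sketch shows the masking case). All numbers are made up:

```python
from statistics import mean, stdev

def is_outlier(series, window, threshold=3.0):
    """Flag the last point if it is more than `threshold` standard
    deviations from the trailing-window mean (window excludes the
    point being tested)."""
    history = series[-window - 1:-1]
    m, s = mean(history), stdev(history)
    return s > 0 and abs(series[-1] - m) / s > threshold

# Steady traffic with one old spike (500), then a genuine jump now (160).
baseline = [98, 102, 99, 101, 100]
series = baseline * 2 + [500] + baseline * 4 + [160]

print(is_outlier(series, window=5))   # True: the old spike has aged out
print(is_outlier(series, window=30))  # False: the old spike inflates the spread
```

With the short window the jump to 160 is dozens of standard deviations out; with the long window the old spike is still in the history and swallows it.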


To Be Human: long and comprehensive (not actually written by him, but a collection of his works).

Flight of the Eagle: short and comprehensive.

Freedom from the Known: short and practical.


While I agree with the spirit of this, the conclusion does not necessarily follow. The mean can be highly informative, but it should never be used alone.

Assume you know only the mean revenue and the maximum revenue (but forgot to measure the variance). You could construct an extreme scenario with the maximum possible variance to generate a "worst case" distribution: every customer provides either zero revenue or the maximum. This two-point distribution has the largest possible variance for a given mean and maximum.

Will you be profitable this year? Your chances will be better than the worst case scenario described above! If higher moments are known (variance, skew, etc.), more accurate bounds can be found.
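A quick sketch of that worst case, with made-up numbers: for revenue on [0, M] with mean mu, the maximum-variance distribution puts all mass at the endpoints, so a fraction mu/M of customers pay M and the rest pay nothing, giving variance mu * (M - mu).

```python
# Endpoint ("worst case") distribution on [0, M] with mean mu:
# P(X = M) = mu / M, P(X = 0) = 1 - mu / M.  No distribution on
# [0, M] with that mean has a larger variance, which is mu * (M - mu).
def worst_case(mu, M):
    p_max = mu / M              # fraction of customers at the maximum
    variance = mu * (M - mu)    # variance of the endpoint distribution
    return p_max, variance

# Made-up numbers: mean revenue 50 per customer, best customer pays 1000.
p, var = worst_case(mu=50.0, M=1000.0)
print(p, var)  # 0.05 47500.0 -> in the worst case only 5% of customers pay at all
```

Any real revenue distribution with the same mean and maximum is no worse than this one, which is what makes the bound useful.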

In conclusion, the mean can be very useful, especially if higher moments are known.


Thank you, this looks interesting. Also, a guy named Kent has a book on this called Psychedelic Information Theory; there is a web version of the book floating around.


Thank you for the interesting link.

>statistics tells us that 50 of them will do better than the market averages (and 50 will do worse)

This is a common misunderstanding: 50% will actually do better than the median, not the average. Averages can be dominated by extreme events, so more (or fewer) than 50% can beat the average, depending on the skew.
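A small illustration with made-up numbers: one extreme winner pulls the mean far above the median, so well over half the group ends up below the mean, while exactly half is below the median by construction.

```python
from statistics import mean, median

# Ten returns, one extreme winner (made-up numbers).
returns = [-2, -1, 0, 1, 2, 3, 4, 5, 6, 100]

mean_r = mean(returns)      # 11.8
median_r = median(returns)  # 2.5
below_mean = sum(r < mean_r for r in returns)

print(mean_r, median_r, below_mean)  # 11.8 2.5 9 -> 90% of the group is below the mean
```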


The median is an average, as are the mean and the mode. Saying "the median, not the average" makes no sense; you mean "the median, not the mean." All three are different kinds of averages.


I never realized that the median is an average... interesting. So any average can be dominated by extreme events, except the median average.

Then we are discussing averages, in general. So, the common misconception would be:

-Averages are dominated by the median, and deviation from the mean is symmetric.-


> This is a common misunderstanding, but actually 50% will do better than the median not the average.

If the market were skewed enough that a symmetrical normal distribution weren't a realistic model, then (assuming a particular skew) beating the median would be child's play, but doing so wouldn't produce returns different from the average portfolio -- and that average portfolio sits at or near the mean, not the median.

Another way to say this is that, if a skewed distribution peaked at some value M (the mode, the point on the curve with a zero first derivative), and a pathological, nonsymmetrical tail at the right or left shifted the median away from it, the majority of portfolios would remain near M in spite of the asymmetry.
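To pin down the three quantities being conflated in this subthread, here is a sketch using the lognormal, a textbook right-skewed distribution (the closed forms below are standard results, not a claim about markets): under right skew the peak (mode) sits below the median, which sits below the mean.

```python
import math

# Lognormal(mu, sigma): a standard right-skewed distribution.
# Textbook closed forms for its three "averages":
mu, sigma = 0.0, 1.0
mode = math.exp(mu - sigma ** 2)       # peak of the density, ~0.368
median = math.exp(mu)                  # half the mass on each side, 1.0
mean = math.exp(mu + sigma ** 2 / 2)   # pulled up by the right tail, ~1.649

print(mode < median < mean)  # True: skew separates the three
```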


The author follows this rule: only recommend strategies that he himself currently uses.

If the advice goes badly, the author also suffers. Such recommendations are more reliable than those from authors without 'skin in the game'.

