Hacker News: lavoiems's comments

That and random race conditions occurring during training.


It is an interesting perspective. My personal opinion is that this constant stimulation costs me focus when I need to do work and ultimately hurts my capacity to get things done. While the internet and computing are powerful tools, they do not yet replace our cognitive capabilities for solving problems and interacting with others. I hope they never will!


I don't blame my phone or the internet for this. Before smart phones the joke was that people would procrastinate by cleaning their room or organizing their desk. It's not about the technology. When it's time to focus, focus.


Yeah, but now it's like organizing millions of desks' worth of distraction.


> If that's the goal, why not simply found another social media company built around those values?

Twitter is one of the few social networks that actually has a network. A lot of social media companies failed because the cold-start problem is very hard to solve at this point in the game. Elon probably knows it, and I would bet that is why he is looking to buy an established social network rather than create a new one.


It's a hard problem if you don't have the clout to get attention and/or if there's no particular demand for whatever distinction your platform offers.

Musk has market-moving clout and a cult of personality, so the remaining question is whether he has a compelling insight into distinguishing features.

If he doesn't, he won't make Twitter better. If he does, he could make a greenfield platform a success.


Google failed at it multiple times. Trump's network seems to have failed, too.


Put differently, we had an average of 7.5 incidents per year in the last decade.


Ya, I wonder what the risk is of getting stuck long term like earlier this year. According to the Wikipedia page, it has only happened a few times in history.


Given that MV Ever Given drew a penis and balls outside the canal[1] before entering, I suspect there is a chance the jam was not an accident.

1. https://m.huffpost.com/us/entry/us_605c2e44c5b67ad3871c9afe


A neat trick not presented here is to use https://www.connectedpapers.com/

The website presents a graph of related works clustered by similarities.


https://scite.ai does this as well (citations are also classified and analyzed for whether they provide supporting or contrasting arguments).

The scite extension also works with connected papers so you can see that info there as well.

Disclaimer: I work on scite


I think that the recent events should allow this post to stay up with a link to the previous discussion.


This repo removed the famous tests some days ago under a commit message "COMPLAINFREE". The tests can still be found by looking at the git history: https://github.com/blackjack4494/yt-dlc/commit/dd2d55f10dac8...

Would it be safer to thoroughly delete these tests from the git history?


Well the tests can also be found under the DMCA repo history too, so I guess we need to take that repo down.


If someone closes the PR and fully deletes the tree from GitHub then the repo should be fine.


No. Tests are an internal technical detail and not part of the product offering to users. Apart from that, downloading from YouTube does not in itself violate copyright law.


GPT-3 is a generative model, isn't it? Can you explain how you converted GPT-3 to a classification model?


There are a couple of ways to do it. You can give it a prompt that shows examples of the classification and it mimics what it thinks is the correct behavior when you feed it new unclassified input. They also have a search endpoint that lets you do classification by giving it an input along with labels as the searchable documents and using the resulting semantic relevance scores.
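For illustration, the first approach (few-shot prompting) boils down to formatting labeled examples into a prompt and asking the model to continue the pattern. A minimal sketch, assuming a simple `Text:`/`Label:` template (the template, the sentiment labels, and the commented-out API call are illustrative assumptions, not the commenter's exact setup):

```python
# Sketch of few-shot classification via prompting (assumed setup).
# Idea: show (text, label) examples, then leave the last label blank
# so the model's completion is the predicted class.

def build_prompt(examples, new_input):
    """Build a few-shot classification prompt from (text, label) pairs."""
    lines = []
    for text, label in examples:
        lines.append(f"Text: {text}\nLabel: {label}\n")
    lines.append(f"Text: {new_input}\nLabel:")
    return "\n".join(lines)

examples = [
    ("I loved this movie", "positive"),
    ("Terrible service, never again", "negative"),
]
prompt = build_prompt(examples, "The food was great")
print(prompt)

# The prompt would then be sent to a completion endpoint, e.g. (hypothetical):
# completion = api.complete(prompt=prompt, max_tokens=1)
```

The prompt ends with a bare `Label:`, so a one-token completion is the classification.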


You can add new "heads" to GPT networks and train those heads to use GPT for new applications.
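As a rough sketch of what a classification "head" means: a small, freshly initialized layer mapped onto the backbone's final hidden state and trained for the new task. The sizes and weights below are toy values in NumPy, not from any real GPT checkpoint:

```python
import numpy as np

rng = np.random.default_rng(0)

hidden_size, num_classes = 8, 3   # toy sizes; real GPT hidden sizes are much larger

# Pretend this is the transformer's final hidden state for the last token.
hidden_state = rng.normal(size=hidden_size)

# The "head": a new linear layer trained for the task, while the backbone
# is kept frozen or fine-tuned alongside it.
W = rng.normal(size=(num_classes, hidden_size))
b = np.zeros(num_classes)

logits = W @ hidden_state + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()              # softmax over the class logits

print(probs)  # a probability distribution over the task's classes
```

Training then updates `W` and `b` (and optionally the backbone) with a standard classification loss.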


Not with GPT-3. I believe only Microsoft is allowed to do that.


Without a clear definition of AI that everyone agrees on, we will never reach AI. If an AI is considered intelligent only if it is "General", then are we, as humans, even intelligent? I would argue strongly that we are missing the 'G'.


Interesting. I thought that Apple mostly did closed research, yet they are publishing at public conferences.

