I can reproduce it on Firefox 55 and 56 in Ubuntu Mate, it crashes after I allow the notification. It'd be useful to have a bug link so I can see what the status is. It could be an Ubuntu issue for all I know.
I'm curious why releasing data in this fashion is troubling? I think this kind of public data sharing in an open-source project is good, since it builds transparency. A top-N list seems like a good way to measure the health of the Homebrew ecosystem, similar to other package managers. Without any other identifiable information like email addresses, it seems near impossible to de-anonymize statistical usage from this set.
Interesting; I don't see apt, dpkg, nix, or yum maintainers releasing top lists of packages. Even the package providers rarely provide a "top" list, especially one with compile-time flags.
The flags provided in this list are very explicit and could easily be correlated with data provided (or harvested) by other parties.
> a good way to measure the health of the Homebrew ecosystem
What value does this really provide to the general public, other than a, ahem, "member" measuring contest? You already see this occurring in this very thread - the MySQL vs. PostgreSQL comments. I fully expect a "Node vs. X" one as well.
I think it's in the same vein as Tesla's strategy of autopilot roll-out compared to Google's self-driving pilot program. 1000 self-driving vehicles running for 1 month can collect data at a significantly higher rate than 1 self-driving vehicle running for 1000 months.
It's the quality of the data and what you do with it that really matters. I think the benefits of crowdsourcing driving data to develop autopilot have been overstated. Waymo has pointed out that they are making the lion's share of their progress testing in simulators, where they can play out every conceivable variation of some strange or difficult edge case in a way that is not feasible in the real world.
One thing you can do with 1000s of fleet vehicles is begin untangling the logistics of running a publicly accessible robotaxi network, which brings with it a whole new set of problems above and beyond the autonomous system itself.
It would be nice to see the issue of privacy, especially in regards to data privacy, evolve beyond a black-and-white matter. Privacy is important: it allows for a bubble of safety and security in our lives. On the other hand, there is real value and utility in data, but often at the expense of privacy.
I think that the trade-off between the utility of data and privacy can be explored in such a way that everyone benefits. Ensuring a balance of power in regards to privacy between individuals, government, and private entities is a worthwhile, but difficult, venture. Data is a good medium for this discussion because of its increasing value in an age where we're collecting more than we can use. Government and society can benefit from data-driven policies and public datasets (a la data.gov). Individuals can regain control and benefit from a collective use of data. Corporations already hold, and profit from, massive monopolies of user data, but the liability involved with private data often preempts sharing.
I think these kinds of conversations will need to be tackled and absorbed by the wider audience before they can have a real impact. In any case, it will be interesting to see what direction privacy as an issue moves in. Personally, I will continue to be more aware of the issue and use products that have privacy in mind.
The goal is to reach 65mph, which isn't too difficult a task. The only parameters that need to be changed to reach that goal are the learning inputs (the area around the car) and the network configuration. I found that having some buffer on the sides and front is helpful in recognizing the conditions for passing a slower car. The size of the hidden layer should also be big enough to take into account the different kinds of situations that can happen in the simulation.
Making it on the leaderboard takes a bit more effort. I'm struggling to figure out the insight that takes me over the 70mph mark. I've toyed with the input parameters, types of hidden layers, the weighted random moves, and learning size. It's been frustrating, and has taken me down a deep rabbit hole about reinforcement learning.
If there are any tips for getting past the 'good enough' solution, I would love to hear them.
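For what it's worth, the "weighted random moves" knob is essentially an exploration rate. A minimal epsilon-greedy sketch of what I mean (names are illustrative, not the simulation's actual API):

```python
import random

def choose_action(q_values, epsilon=0.1):
    """Pick an action epsilon-greedily: with probability epsilon take a
    random action (explore), otherwise take the best-known one (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

# With epsilon=0 this is purely greedy: index of the largest Q-value.
best = choose_action([0.1, 0.9, 0.2], epsilon=0.0)
```

Decaying epsilon over training (explore a lot early, exploit later) is one common way past a "good enough" local optimum.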
I enjoyed this tutorial as a first step into the world of deep learning frameworks. For context, I recently finished the Machine Learning course on Coursera.
I liked the parallel constructions of the neural network and the transition from linear algebra to framework. I really appreciate the ease of use of PyTorch, which pushed me over the edge into actually doing something useful with deep learning.
I managed to get through this tutorial and make a submission to the kaggle digit recognizer competition in the span of a few hours. I'm excited to figure out how to train a model more efficiently, which seems to be the difficult problem of choosing network hyperparameters.
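The "linear algebra to framework" transition really clicked for me. In plain NumPy, a two-layer digit classifier's forward pass is just two affine transforms with a ReLU between them (dimensions chosen for flattened 28x28 MNIST images; all names here are illustrative, not the tutorial's code):

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    # Hidden layer: affine transform followed by ReLU.
    h = np.maximum(0, x @ W1 + b1)
    # Output layer: raw scores (logits), one per digit class.
    return h @ W2 + b2

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 784))          # one flattened 28x28 image
W1 = rng.standard_normal((784, 128)) * 0.01
b1 = np.zeros(128)
W2 = rng.standard_normal((128, 10)) * 0.01
b2 = np.zeros(10)

logits = forward(x, W1, b1, W2, b2)        # shape (1, 10)
```

The framework version replaces the hand-managed weight arrays with layer objects and gives you autograd for free, but the underlying math is exactly this.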
Man, that's awesome! Thank you so much for telling me. As a heads up, there's a completely revamped version of PyTorch being released soon (possibly today). You can see some documentation for it here: http://pytorch.org/docs/
JavaScript's domain is mostly client side web browser scripting, and no language is close to throwing it off its throne. So no, JavaScript will always have its place.
The biggest problem with Cygwin is that programs linked with Cygwin inherit global state from a Cygwin installation on the system they're running on. If you use Cygwin to produce a Windows program that's meant to just run on any system, it will work right for most users, but if a power user who has Cygwin installed and has their own custom mounts, options (like different binary/text mode settings), etc. tries to run your program, it might break spectacularly. This makes Cygwin a really poor choice for making binaries you want to distribute as standalone programs.
Aside from that, Cygwin tries too hard to be a complete Unix environment on Windows, whereas midipix just gives you enough to use interfaces that were standardized in POSIX as a reasonable, uniform API for all operating systems to provide. Some functions go beyond that, but you don't have to use them. And even some things that are mandatory in POSIX are optional in midipix; as I understand it, you can choose at build time whether you want the overhead of being able to support tty devices (and the associated semantics like job control, signals from the controlling tty, etc.).
The thing that won't work is if you try to mix and match a DLL from one installation with binaries from another, but I think you can agree that that limitation is fair. You just need your paths set up correctly.