Yeah, articles like this sure attract a lot of instant experts.
Of course, they have absolutely no idea of what the evidence looks like -- such as, say, the near-perfect blackbody spectrum of the cosmic microwave background, the helium abundance, the isotropy of the CMB, the angular correlation function of the CMB fluctuations, the concordance of the ages of the oldest stars and the Hubble time, the agreement of the baryon acoustic oscillation peak with the sonic horizon at decoupling -- you know, stuff like that.
One time I got on a bus and sat in front of an old gentleman who kept looking up at me occasionally. When he got off the bus and walked past me, he said: "Whatever it is, don't let it beat you, kid." I must have looked physically wrecked that day. Ever since then, those words have stayed with me.
PHP is not dead. It remains about as popular now as it has been for some time, and shows no signs whatsoever of dying.
There's not a lot of new or interesting stuff being done in PHP, because of the limitations of the language, but it's still the main workhorse for a great deal of humdrum ordinary Web stuff: forums, blogs, wikis, smaller shopping sites, smaller content management systems. Most organizations simply don't need a bleeding edge solution, and for them something thrown together using off-the-shelf PHP-based solutions (WordPress, MediaWiki, phpBB, etc.) will be quite sufficient for their purposes.
Also, because PHP is so pervasive and so readily available, PHP developers are really easy to come by.
PHP is far from dead, yes. But JavaScript and Node are the first case in my history where an alternative seems to stick and have proper staying power in PHP devs' minds.
This never happened with RoR, for example. It was a passing fad in the mainstream. I know it's probably still used by millions, but that's small relative to PHP.
AVX rolls up all the previous SSE versions and provides 3-operand versions of those instructions, plus 256-bit versions of most FP (AVX) and integer (AVX2) insns.
We don't really think of that as making SSE obsolete. Think of AVX instead as a new and better version of the same old SSE instructions. They're still in the ref manual under their non-AVX names (PSHUFB, not VPSHUFB, for example). You can mix AVX and SSE code, as long as you use VZEROUPPER when needed to avoid the performance penalty from mixing VEX with non-VEX insns (on Intel). So there is some annoyance in dealing with cases where you have to call into libraries that might run non-VEX SSE instructions, or where your code uses SSE FP math but also has some AVX code to be run only if the CPU supports it.
If CPU compatibility were a non-issue, the legacy-SSE versions of vector instructions would be truly obsolete, like MMX is now. AVX/AVX2 is at least slightly better in every way, if you count the VEX-encoded 128-bit version of an insn as AVX, not SSE. Sometimes you'd still use 128-bit registers because your data only comes in chunks that big, but more often you'd work with 256-bit registers to do the same op on twice as much data at once.
SSE/AVX/x87-FP/integer instructions all use the same execution ports, so you can't get more done in parallel by mixing them. (Except on Haswell, where one of the 4 ALU ports can only handle non-vector insns, like GP-register ops and branches.)
The "level" of a programming language is largely a function of the degree to which the language provides abstractions that separate the programmer from the details of the implementation in machine code. A low-level language provides few or no such abstractions; higher-level languages provide increasingly more abstractions that separate the programmer from implementation details.
Obviously, the lowest level language is machine code itself, which has no abstractions at all, and is just sequences of numbers. This is just about as low as it gets. Assembly language provides one level of abstraction: instead of dealing with opcodes directly, the programmer uses mnemonic names for the instructions, which the assembler converts into the opcodes. The assembler will generally also allow the programmer to give memory locations names so that they can be addressed in a symbolic way instead of by their numerical addresses. But the programmer still has to write each machine instruction himself or herself, keep track of what is in what memory location and in what register, and so forth. Assembly is therefore the "middle level" of programming languages.
The next step up introduces abstraction at the instruction level. Instead of specifying the exact instructions to be used, the programmer specifies what operations are to be performed and the compiler figures out the best way to do that. Programmers no longer have to keep track of what is in what register from instruction to instruction; the compiler does that for them. This also allows for programs to be written that do not depend intrinsically on the underlying instruction set of the computer. A program written in FORTRAN (the first language at this level) should run the same way on literally thousands of different architectures (although you'll run into issues with the definition of "number" being different). This is arguably the gravamen of a "high level" programming language.
Unfortunately, once you go past this level (the level at which FORTRAN, COBOL, BASIC, and C all live, along with other relics like PL/I and Modula), it gets confusing, and often argumentative. That's because beyond this baseline, languages add different additional abstractions. Not all further abstractions are compatible with one another, and different abstractions provide different solutions to similar problems. It's not possible to arrange them into a hierarchy. Is Haskell higher level than Prolog? Is Clojure higher level than Ruby? Where does APL fit into this picture? SNOBOL? FORTH? (I've been around a while and have probably forgotten more programming languages than a lot of today's programmers know.) These questions can't be meaningfully answered because the languages are on different branches of a very complicated tree that has lots of interactions between its branches. In some cases you can make a coherent argument, such as when one language is a clear evolution of another, with additional abstraction layers added. So, for example, it is fairly arguable that C++ and Objective-C are higher level than C, because both evolved from C and add abstractions that C lacks. But you can't really say which of C++ or Objective-C is "higher level". And when dealing with languages that are not well-related to one another (such as, say, Clojure and Ruby), it's quite impossible to say that one is "higher level" than the other. All you can do is enumerate the ways that they are similar and different.
So, while I've seen people try to create definitions for levels beyond "low", "middle", and "high", these definitions usually reflect value judgments by the person making them (e.g. "functional languages are higher level than imperative ones" or "languages with intrinsic memory management are higher level than those with explicit memory management").
Coursera Machine Learning is one of the best places to start, but there are countless resources. There's a huge, wonderful open list of links on GitHub[1]. I definitely recommend you take a look at it.
There are also other great resources online, like those I list below:
1.) In-depth introduction to machine learning in 15 hours of expert videos[2]
2.) Deep Learning Tutorial (@ ufldl.Stanford.edu/tutorial/, can't post the link because I'm out of mana, I mean, not enough reputation yet)
I have been trying to answer this question for myself, and one measure I've taken towards this goal is to record all of my mathematical reading, work, and random thoughts in a journal. I highly recommend the practice as it has been very illuminating to me since I started a few months ago. Reviewing my previous readings allows me to ascertain how much math I actually end up retaining from my study sessions, and keeping all of my work in one place (as opposed to throwaway scrap paper) allows me to spot any particularly common mistakes.
So far, I've found that my memory is far more tenuous than I had previously assumed. I'd look at last month's entries and realize that I'd only retained 20% of what I had learned, with fine details especially prone to slippage. Yet from analyzing my mistakes, I've also found that those very details are much more crucial than I had thought.
The result of all of this is that I've started to shift my focus from "learning new math rapidly" to "winning the uphill battle against memory loss." From this new perspective, the old adage that "the only way to learn mathematics is by doing" begins to make a lot more sense. While active learning is far from a cure for forgetfulness, given my own mnemonic capabilities I have come to see that it would probably be a better long-term investment to spend a month fully working through and understanding a chapter than to spend the same time blazing through several chapters while skipping the exercises (having done both).
I emphasize again that this is my own conclusion based on my own characteristics, and that is precisely why I recommend that everyone find their own answer to this question by keeping their own math notebook.
The rhetoric of "not allowing a safe space for terrorists to communicate" is complete bullshit.
Terrorists can communicate using a book cipher or pick from any of a huge number of other options. The kind of terrorists we should actually be concerned about (competent ones) will already use extra measures such as this in conjunction with strong encryption.
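To make the point concrete, here is a toy sketch of how little machinery a book cipher needs. This is my own invented example (the shared text and word-index scheme are hypothetical, not any specific historical cipher); the security comes entirely from keeping the shared text secret, and no encryption ban touches it:

```javascript
// Toy book cipher: both parties secretly agree on a shared text, and each
// plaintext word is replaced by its index in that text. Anyone without the
// shared text sees only a list of numbers.
const sharedText = "it was the best of times it was the worst of times".split(" ");

function encode(message) {
  // Replace each word with the index of its first occurrence in the shared text.
  return message.toLowerCase().split(" ").map(word => {
    const i = sharedText.indexOf(word);
    if (i === -1) throw new Error("word not in shared text: " + word);
    return i;
  });
}

function decode(indices) {
  // Look each index back up in the shared text.
  return indices.map(i => sharedText[i]).join(" ");
}

const ciphertext = encode("the best of times");
console.log(ciphertext);         // [ 2, 3, 4, 5 ]
console.log(decode(ciphertext)); // "the best of times"
```

A real user would pick indices into a physical book (page/line/word) rather than word positions in a short string, but the principle is the same: the channel carries nothing a mandated backdoor can decrypt.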
This is totalitarianism.
"For too long, we have been a passively tolerant society, saying to our citizens: as long as you obey the law, we will leave you alone." - David Cameron.
This character is "OGHAM SPACE MARK"[1], which is a space character, so the code is equivalent to alert(2+ 40). From what I know, any Unicode character in the Zs class is a whitespace character in JS[2], but there don't seem to be that many[3]. However, JS also allows Unicode characters in identifiers[4], which lets you use interesting variable names like ಠ_ಠ.
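Both behaviors are easy to check yourself. A minimal sketch, runnable in Node or a browser console (using eval in place of alert so the result can be inspected):

```javascript
// U+1680 (OGHAM SPACE MARK) is in Unicode category Zs, so the JS parser
// treats it as whitespace between tokens.
const oghamSpace = "\u1680";
const result = eval("2" + oghamSpace + "+ 40"); // parsed exactly like "2 + 40"
console.log(result); // 42

// Many non-ASCII characters are also legal in identifiers:
const ಠ_ಠ = "valid variable name";
console.log(ಠ_ಠ); // "valid variable name"
```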