Odd that ARM even bothered to reply. And the arguments are exactly what you'd expect from an entrenched leader: essentially "the benefits of standardization trump openness", though not stated that way.
A few bits struck me as amusing, like the chart with all the extensions to the ARM ISA over the years, most of which have been abandoned in modern cores. Citing the original VFP standard or Jazelle as "innovation" seems laughable.
If I were to write an ARM-vs-RISC-V paper, I'd start with the important things that are still actually missing from RISC-V, like an MMU spec.
The point is that openness provides the same benefits as a standard: a ground floor that everyone can stand on and build on, with no cost of entry (unlike a business-run standard).
Well, that's not really the whole story, though. Look at Linux. Linux is "open." Linux is also a much harder target to write software for than OS X or Windows, because a lot of people do a lot of different things with that openness, and so you can't count on a specific version of certain APIs or ABIs being there. If modern CPUs looked like modern Linux distributions, a lot more effort would be required to make software portable and to run widely. (The irony here is that ARM resembles this Tower of Babel situation a lot more than x86 does.)
In Linux, if I want to fork a process to create a sub-process I call
pid_t pid = fork();
if (pid == 0)
    execl("./a.out", "a.out", (char *)NULL);
// now the child is running a.out while the parent carries on
In windows
bRet = CreateProcess("myapp.exe", NULL, NULL, NULL, FALSE, 0, NULL, NULL, &sui, &pi);
//now one instance of my app is running...
In conclusion, no. Linux/POSIX is far nicer than Windows/NT. There's a reason we've been using the same interfaces since 1973: they're very clean and very nice.
Both of your examples explode pretty badly once you start adding the additional Linux/POSIX code to fully match the functionality that the Windows stuff has.
Show me the full Linux/POSIX code to open a file with different security access modes, with different sharing modes, with different dispositions (when the new file should be created, if at all, and what should happen to an existing file at the same path), and with hints to the OS about how to handle the file (should it be encrypted, should it be deleted once all handles to it are closed, etc.).
Show me the full Linux/POSIX code to create a process while setting its security ACLs and whatnot, its priority level, its environment strings, its stdin/stdout/stderr file descriptors, its window position (if any), etc.
Linux/POSIX aren't nicer than Windows/NT; they're simpler APIs for a simpler world. That doesn't make them better or worse; they're just different tools.
Linux/Unix has a significantly lower barrier to entry than Windows and a much nicer command-line interface that just works by default. Want to install a C compiler? Just run apt-get install gcc and you are golden. Same for Python, Ruby, Haskell, and any other language under the sun. I've barely used Windows since I outgrew my fascination with computer games; it has nothing compelling to offer someone working in a scientific environment who isn't forced to use Microsoft Office.
Yeah, sure, your one cherry-picked use case is awesome. To counter that, I recently tried putting together a wiki server using apt-get, and for some reason apt-get installed an out-of-date version of the software that didn't work with the apt-get version of Apache I was using, so I ended up having to download it manually and patch a bunch of configuration files anyway. The Windows version just used an installer that set it all up correctly with a few clicks.
Maybe now that you've 'outgrown' video games you can outgrow thinking that your user experience is authoritative. Better, nicer, whatever: these are all just words people use to praise things that they like.
But saying Linux / Unix has a lower barrier to entry than Windows is the kind of thing only a long-time Linux user can say with a straight face. Unless you are talking about money, and then, well... yeah.
What if you don't have aptitude (hint: apt-get won't work)? What if I don't want a command-line interface? What if I want to actually debug a large C/C++ codebase?
Python/Ruby/etc. are also available on Windows, usually with friendly installers.
Look, I'm all for Linux/POSIX from an operations standpoint, but the programming story is merely different, and to pretend otherwise makes one look foolish.
EDIT:
Also, as cobrausn pointed out, what about when the official sources for the package are hella out of date? Not so friendly then, eh?
But what exactly can they do on Windows that they couldn't figure out in something like ElementaryOS? Get infected with malware? I think most people are actually pretty clueless about how to use Windows effectively, especially now that we are deep into the Windows 8 world.
MS Office is a big one; Excel is pretty much unrivaled when it comes to the spreadsheet game. Open/LibreOffice are nice but not as good as MS Office, plus everyone already knows how to use Office, so businesses don't have to spend money retraining people on it.
Don't get me wrong, I use OS X/Ubuntu daily and die when I have to use Windows, with its lack of POSIX compliance and its abomination of a shell, but that doesn't mean it's the right choice for the masses.
Okay, but Office is not Windows. It runs on Macs, and there are plenty of virtualization options for businesses to run Office for people without a Windows desktop. RHEL 7 in particular is built for it. And Microsoft's web apps are getting better all the time; I wouldn't be surprised if the web app catches up to the desktop app in the near future. And everyone's grandma does not know how to use Excel.
I found that programming for Windows is a pain in the ass compared to GNU/Linux. If I need a library, I only need to do an apt-get install XXX or download the source code, and it will usually compile with zero problems.
I'm not talking about writing software, so much as I am talking about distributing software. Yes, you can use apt-get to install whatever libraries you need, but you can't guarantee that users are going to have access to the same version as you through their distro's repo, so you have to either statically link the version of each library you wrote the code with, or wait for your software to be picked up by a maintainer for all the common distros out there.
Packages do fix this problem, though. You specify dependencies in the package you make for each target distro, and either you duplicate that dependency graph across distros (a bad idea) or you let each distro's packagers handle it (a good idea).
For example, you can make a deb that works on Debian, Ubuntu, and any of its derivatives with its dependency graph. You can do the same with Fedora. And the Arch ecosystem will just use PKGBUILDs of the rpm or deb to package it themselves.
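Concretely, the dependency side of a deb is just the Depends: field in debian/control; every name and version below is made up for illustration, not taken from any real package:

```
Package: mywikiserver
Version: 1.0-1
Architecture: amd64
Maintainer: Example Maintainer <maint@example.com>
Depends: apache2 (>= 2.4), php5, libc6 (>= 2.17)
Description: hypothetical wiki server package
 The version constraints here are illustrative; each distro's
 packagers would pin them against their own archive.
```

This is exactly the part that has to be redone (or at least re-verified) per distro, which is why letting the distro's packagers own it tends to work better.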
This only works if you let the distros handle all the work for you. But say you need an Apache version RHEL 6 doesn't have, so now you have to build from source. And now you have to build PHP from source too, because RHEL's PHP package won't run with your new Apache. On Windows, this requires that you run two MSI files and hit okay a few times. On Linux, this means you're compiling everything from source. Linux apps are less portable between distributions than Windows apps are among Windows versions (hell, thanks to WINE a randomly picked Windows app is more likely to run on both Fedora and Ubuntu without modification than an actual Linux app is; the distribution just hides that work from you most of the time).
Things are a lot better for this in 2014 than they were in 2004 (much less 1994) but I still regularly run into things that need quite a bit of handholding to build.
OS X has good package managers, but they aren't as entwined with the OS as aptitude or yum, for better or worse. Homebrew and MacPorts are the most popular.
Re: Linux, I find that's less true than it used to be, at least on x86; I run a few pieces of non-free, binary-only software (Xilinx tools, Renoise) and it's problem-free. They're huge statically linked blobs, though.