It's gonna be huge when someone nails it. Huge for work. Huge for home. The next smartphone revolution: smartphones took the Internet from a place you physically went (a desktop computer, or a laptop that's not terribly portable and that nobody's gonna carry all the time) to something that could follow you around. They put the Internet everywhere—not like water on tap, but like air.
AR glasses are going to mean the Internet's not just available everywhere, but overlaid on the real world, all the time, which is a pretty big distinction, about as big as the smartphone revolution.
A bunch of stuff that's kinda sucked will suddenly be great. AR tools. AR games. QR codes. Questions we used to have to ask our phones will simply be answered, automatically ("what's this song I'm hearing?", "who's that person?"). Voice assistants will become far more useful (get used to hearing a lot more people in public talking to their assistants—it'll seem weird or dickish and annoying at first, but then, so did texting in public or talking on the phone in public, until, in a very short span, they didn't anymore).
Of course, we'll see if Apple's the one who finally cracks it. Someone's going to, I'm pretty sure, and I think all the major tech companies agree—they've all been pouring money into AR R&D for years and years, even though it's been clear the whole time that, on a phone or tablet or gaming device (Nintendo DS), it's got terrible UX, and there's no fixing that without new hardware. They must all believe wearables are going to be A Thing before too long.
I'm not seeing a lot of people talking on the phone outside.
Texting in public was never an issue. Not sure what you mean here.
Questions like what song is playing are mundane.
The advantages of QR codes are still very limited. Restaurant menus? Sure. What else? Perhaps Wi-Fi codes? Okay.
But what else, really? Driving, perhaps, I can see that. Instead of adding AR in the car, I might want to wear an AR car version, but what then?
I don't believe people will be willing to wear any type of glasses every day if they don't have to. And if you don't wear them every day, they won't become a sane default.
> I'm not seeing a lot of people talking on the phone outside.
Sure, that's down a lot as calling in general has dropped off. It was a Whole Thing early on. Especially people on earbud+mic combos or headsets.
>Texting in public was never an issue. Not sure what you mean here.
This and the call thing were a big deal in pop culture, even if the circles you ran in between about 2005 and 2013 weren't bothered by them (many circles were).
> Questions like what song is running are mundane.
99% of what normal people use the Internet for is mundane. And that may be understating it.
As for the rest—I guess if you can't see it, you can't see it. The use cases are endless (like, think bigger with the QR codes—not just linking to menus, but cueing AR to replace entire surfaces with something else; now apply that "think bigger" to everything else you dismissed). And maybe I'm wrong! But I'm pretty sure it's going to be a big deal.
I agree with everything you wrote, but I don't think Apple will be releasing an AR headset within 3 years. A VR headset? Sure, but AR is much more challenging.
According to various reports, Apple views this as the successor to the smartphone. Unlike everyone else, they're focusing more on everything other than games.
This also explains why Meta is still committed to plowing billions into XR to secure their own viable hardware platform.
Yeah, you nailed it. If AR is possible to build, someone will build it, and if they do, it will (in due time) completely supersede the iPhone and destroy Apple's revenue. They literally cannot afford _not_ to compete in this market. Facebook's aggressive move here makes sense considering all of the scar tissue they have from not controlling their own destiny on mobile.
Arguably, without the OpenGL vs. D3D and now Vulkan vs. D3D back-and-forths, along with experimental APIs like Mantle, we wouldn't have a lot of the robust tech we have access to today.
OpenGL's freeform experimentation and evolution with extensions let people test things out in production environments to figure out what worked, while D3D's stable feature set meant that games and productivity software could - if it made sense for the developer - choose to ship a more limited feature set that worked everywhere, all of the time.
D3D also has consistently offered great debugging tools and a robust reference rasterizer, things you simply can't get in an OpenGL environment. As a game developer it's invaluable to be able to swap over to a Direct3D backend for debugging even if you end up using OpenGL as your default. (These days, Vulkan has first-class debugging support too, which is great.)
Now we have Vulkan as the new home for experimentation and it has great debugging and validation layers, while D3D pushes forward on certain new features and provides a more consistent baseline on Windows desktops. For console games as well, you can use Vulkan on (AFAIK) Nintendo Switch, while using D3D12 on Xbox, so each API is providing value for console game devs as well.
> OpenGL's freeform experimentation and evolution with extensions let people test things out in production environments to figure out what worked
Yeah, except nobody did this. The cutting-edge features came to Direct3D first because Microsoft had early access to what the GPU vendors were doing and could plan out what that would look like from an API perspective. The vendor extensions only came out around the same time as the DirectX support, maybe later. Core OpenGL support would only emerge years later.
> As a game developer it's invaluable to be able to swap over to a Direct3D backend for debugging even if you end up using OpenGL as your default.
What? Name a game developer who's done this in the past 20 years -- used Direct3D for debugging and OpenGL as the default. Only id Software actually used OpenGL and Vulkan seriously, and they're owned by Microsoft now, so that will change.
> Name a game developer who's done this in the past 20 years
I'd imagine it was really popular among console developers, who would usually develop two versions of their game anyways (a DirectX one for Xbox/PC and an agnostic one for Mac/Playstation/other). DirectX is traditionally considered easier to use (and comes with PC tooling) so I could see how people would prefer it for debugging.
Contrary to urban myths, OpenGL was never a thing on most game consoles.
Sony only supported it on the PS2, and quickly moved to LibGNM(X) as almost no one cared; Nintendo had an OpenGL-like API on the Wii; and while the Switch supports GL 4.6/Vulkan, its main API is NVN.
IIRC there was an OpenGL layer for PS3, but it was terrible and I only know of one (indie) dev who shipped a full-blown game on it. It was an awful experience from my understanding.
Before Vulkan, if you wanted to do graphics debugging like step-through on pixel shaders, your options were Direct3D + PIX or, if you had access to them, game console dev tools. Maybe Apple had something?
Obviously, lots of games only had an OpenGL backend. So those devs simply didn't have access to those tools. It's the one outlier API with bad tooling.
I still recall oh so many OpenGL apps failing to start when OpenGL 2.0 was released: 1.x had been around for so long that "everyone" had gotten used to just checking the minor version number.
I'm aware of the history of DX12. It started as a clone of Mantle, which they made with full knowledge that Vulkan was going to be a thing. No one stopped them from collaborating on Vulkan instead, which originated from Mantle the same way. I see no excuse here but the classic NIH / lock-in push.
D3D12 is nothing like Mantle. Not even close. D3D12 is heavily derived from D3D11; D3D12's own documentation is even written as 'behaves like D3D11 except for these bits'.
If anything, Vulkan is a clone of Mantle, because Vulkan is Mantle. It was donated to the Khronos Group by AMD and served as the foundation for Vulkan. If you put the API headers for Vulkan and Mantle side by side, it's shocking how similar they are. Vulkan 1.0 is largely just Mantle with more API ceremony for tile-based mobile GPUs and NVidia's (at the time) far more restrictive binding model.
It was derived directly from Mantle, same as Vulkan. See the link above, which literally records that historical fact.
Same way Mantle was used for Vulkan by Khronos, MS used it for their NIH because they didn't want a collaborative effort to reduce their grip on the gaming market. Without AMD, MS would never have come up with DX12 on their own so fast.
AMD expressed interest in a collaborative API quite early on, and Mantle was presented for that very purpose. Khronos used it as intended, while MS hijacked it for their own market-manipulation purposes in their usual MS-only way.
The linked tweet only really shows that some of their documentation language was 'borrowed'. The actual API semantics are nothing like Mantle or Vulkan.
The synchronization model is completely different, the queue system is completely different, the memory allocation model is completely different. The binding models are massively different. Maybe some of the core ideas behind Mantle were taken with explicit synchronization, and maybe they started from Mantle as a base but the end result is completely alien to what Mantle was.
Microsoft could never justify using Vulkan over DirectX 'Next' to its developers. It would be a total deprecation of Direct3D. It would require all their developers to throw their entire renderer backends out and start fresh with 100% new tools. A lot of effort was put into making the transition from D3D11 to D3D12 easy, even to (IMO) the detriment of the API semantics. They even kept shaders intercompatible while adopting an enormously different binding model.
D3D12 is also largely a much friendlier API to use. Vulkan is (was) verbose to the extreme in ways that really didn't matter to the majority of Direct3D users on desktop GPUs. Vulkan render passes are a nightmare to work with, and largely served no benefit to the desktop and console GPUs Direct3D is used for.
Vulkan has evolved a lot since 1.0, relaxing lots of the excessive restrictions that were originally there. A lot of this stuff likely wouldn't have happened if it weren't for D3D12 putting pressure on Khronos to improve the developer experience.
Vulkan is not the panacea people think it is, but it's getting better. And so is D3D12 by borrowing some of Vulkan's better ideas. To say D3D12 should have never existed is just 'M$ bad' dogma.
I haven't looked at DirectX 12's documentation, but I really should. Khronos Group's Vulkan Samples are broken out of the box: they crash on a fresh build, and the Hello Triangle API sample crashes when you minimize it. It's just a shitshow.
Alexander Overvoorde's Vulkan Tutorial is, in my opinion, the de facto practical documentation for a first-pass Vulkan implementation, but he also does some small things wrong that you just should not do in a production application.
It took me over 700 lines of C for a minimal replica.[1]
I'm not a fan of Vulkan not having a built-in compiler for shaders compared to OpenGL. I'm sure there's a superior technical reason for it, but I don't care because it's ruined my development experience, and reintroducing the behavior requires a sizable increase in CMake dependency overhead.
> I'm not a fan of Vulkan not having a built-in compiler for shaders compared to OpenGL.
What does that mean? The standard doesn't dictate any particular form of implementation for the compiler. Different OpenGL drivers can use different compilers. Same for Vulkan.
Semantics drifted over time. The main input was still Mantle, not a bit of doubt about it. That documentation shows it was borrowed practically verbatim in its early form.
Vulkan also didn't keep Mantle as is in every aspect when it used it.
Point is, MS didn't need to build things from scratch. They used Mantle as the starting point, same way Khronos did.
So the question one should be asking is: why didn't MS collaborate, when they knew very well the collaborative API was going to happen? No one stopped them from making that collaborative API friendlier or better. But MS being MS, they rushed ahead with NIH.
I suppose it could be seen as NIH, but there are legitimate engineering benefits for Microsoft in having its own API. It's important to remember that Direct3D used to be a completely separable component from Windows, and wasn't installed by default. Around the same time D3D12 and Vulkan were happening, Windows also pulled D3D in as a core system component that's universally available.
If you're an engineer trying to pick "the" GPU API for Windows, the only real argument for picking Vulkan, the open API, over the internal API Microsoft had already had for ~15 years by that point is that it's the 'open source friendly' choice. How do you justify using someone else's API as a core Windows API over your own solution that you already have? There's really no engineering or business justification.
It's similarly easy to construe Vulkan and Mantle as an attempt by AMD to take control of the GPU API space and specify an API particularly friendly to their hardware as the standard. They largely even succeeded considering what Vulkan became. Even D3D12's binding model is basically an exercise in how close we can get to directly exposing AMD's "anything goes" binding model while still allowing NVidia to function. It's very nice as a GPU vendor when your driver can be made closer to a no-op than your competitors.
Too many people pile on D3D12 simply because of Microsoft, rather than fairly considering the context of what created it. Apple made the same decision too with Metal, but I rarely hear any complaints there.
Engineering reasons just don't seem convincing when MS has a DX-only policy on Xbox and a long history of anti-competitive behavior, especially in the gaming segment.
If MS allowed using Vulkan on Xbox, for instance, I'd be more willing to give them the benefit of the doubt. But as it stands, I see them pushing DX as having lock-in motives.
PlayStation only supports Sony's own proprietary Gnm/Gnmx[1]. I've heard their APIs are somewhat based on (very old) OpenGL, but different enough not to actually be compatible.
Nintendo Switch supporting OpenGL or Vulkan is an exception in the console space.
Very few games use Vulkan on the Switch. NVN, Nvidia's API, is the true native graphics API there. Those that have tried Vulkan on Switch usually end up ditching it for NVN, as Vulkan leaves too much performance on the table on a system with little power to spare.
Right, but my point is that this isn’t some famous malicious intentional lock-in by “evil Microsoft” — it’s how every console is designed. (And arguably Xbox has an advantage here because it shares DirectX with Windows)
Not really. It has nothing to do with the console idea or form factor; it's just how these messed-up companies are "designed", using anti-competitive methods. See the Steam Deck, which uses Vulkan just fine.
There is absolutely lock-in in MS's and Sony's approaches. There is no inherent need for it just because it's a console.
I looked into the business of issuing a stablecoin back in 2018 and found that the only way to make it work, so I could cover the costs of running the business, was to invest the fiat into other ventures that could offer yield. At that point, though, the stablecoin isn't stable.
I've always been curious about this: if you have a stablecoin that's in the billions, like some are, can't they just put the fiat into a bank account and live on the interest?
(I am NOT endorsing that company, Tether / Bitfinex, or suggesting that anybody invests in it in any way. Just agreeing that if you have a lot of money, then you might profit from it.)
A $1B high-yield (3.5%) savings account yields $35,000,000 a year. Sounds like plenty to operate what is essentially a shell company plus an app for managing monopoly money :)
I don't understand how this could be true: can't you just charge fees for the things which require your effort? Tether, for example, charges 0.1% for every withdrawal.
This is one of the things that is weird when people insist on free bank accounts at their banks.
As a secondary bank account with little usage, sure, but as your primary account? How do you expect your bank to stay around if they don't make money from the basic product they offer? They'll try to upsell you, charge hidden fees, reduce interest rates, make risky loans and investments, and do all sorts of things that aren't aligned with your interests.
With rising interest rates, it should be making plenty of money.
> $16b BUSD issued, 3 month tbills ~4.6%, so $713 million run rate divvied up between Paxos and Binance, of which a decent portion probably subsidized user trading fees on the /BUSD pairs
I've been trying to find a real job like this for ages.
Expertise in business things businesses need, like SAP systems, is highly sought after and paid well.
Best case, you're broad and an expert on 2-3 things.