It’s too bad there’s no way at all to know whether any media whatsoever is plausibly real, say by a cryptographic web of trust. Even though similar mechanisms are used to secure services such as domain name resolution, clearly nothing similar could ever be applied here. I guess we’re just doomed to be fed a diet of adulterated content.
It's simply called using common sense and relying on trusted sources.
"Cryptographic trust" is never going to prove anything. Even if a camera can encrypt a photograph to try to prove a particular photograph was taken at a particular place and time, nothing is preventing someone from spoofing a GPS signal and snapping a photo of a doctored photograph with the camera.
Adulterated content has been around for basically as long as the printing press.
Trust can never be 100% proven in anything. None of these mitigations are intended to provide that level of technically impossible certainty.
The intent of all trust systems is to make it more difficult (technically, financially) to falsify information. It’s a statistics game — the harder it is to break a trust system, the fewer people will do it. This is true for everything from house keys to PKI to currency to ID cards.
The general population doesn't care about whether an image was cryptographically signed. It's going to have zero effect.
People who don't care about the truth are going to continue to pass around edited images, while those who do get their news from trusted journalists and aren't paying attention to random images on social networks.
I never said anything about unfiltered social networks.
I already have trust filtering, just by following specific journalists whose reporting I trust. They do the work of verifying photographs, because it's part of their area of professional expertise, and I appreciate it.
Looking at the law (in Oregon): “Total calories must be posted in a conspicuous place in a font size no smaller than the price, or the least prominent font size of the description of the item. A statement listing the daily nutrient intake amounts of calories, saturated fat, and sodium.”
Seems like any legal requirements imply signage in the physical venue?
I used to really love getting each new floppy when it was released; it was a big deal at our Amiga club. My own contribution, a 4D-to-2D tesseract renderer, made it onto disk 14, but a second submission was pulled for violating the Tetris copyright.
Again the author doesn’t consider crypto solutions like PGP, Keybase, or any kind of signed social trust graph. Why do people keep writing this sky-is-falling thesis over and over without at least arguing for or against cryptography?
Your out-of-hand dismissal of this is to point at technologies that practically no one uses at all, and that definitely no one uses for the scenarios the author is referring to (e.g. search results)?
You think suddenly everyone is going to start signing their tweets and blog posts, and people will en masse assign a trust score to said content based on the people they know and trust in their PGP keychains?
I'd like that world quite a bit, but it's decidedly not going to happen - probably not at all, and definitely not at scale. Are you so sure that's what we'll all do in response to automated content that you're calling this article a "sky-is-falling thesis"? If so then I'm genuinely baffled by your confidence here. Where does it come from?
We see one of these essays a day at this point. I do think authors need to at least critique why they think technical solutions can’t help.
We did migrate from HTTP to HTTPS, for example. And we do use a (top-down) cryptographic scheme for DNS. We also use similar schemes for cryptocurrency. So we do use the technology as needed, when needed. I’d argue the time is coming when we need to use it in some way to secure human conversation on the net. If I am confident here, it is because I see this as typical of the same transitions that forced us to use crypto elsewhere.
True, PGP is a failure, but I can see room for a scheme where people bother to indicate that a person is real. Nobody is going to bother indicating that a post is real or not (nobody cares).
Is the problem you’re seeing about fake content or fake people? Or both?
Does it have low value to you to know that I myself am, say, 3 friend hops away from you and have, say, a “likelihood of being human” score of 7/10?
And it wouldn’t help you to be able to know that, say, the many random SMS messages, random phone calls, or random posts and articles you get have a trust score of, say, 0/10 because nobody in your extended network of trust can attest they exist?
True, fake content is hard to solve. At an intimate scale nothing can solve deception. If you’re my friend and you decide to manipulate or deceive me then there’s not much I can do. I extended trust to you and you violated it. This isn’t a new phenomenon.
But the article isn’t specifically about fake content. It is also about sock puppets. It’s about an extended field of spam. Crypto can play a role at least in asserting that a post is uttered by a friend of a friend, or by somebody who has greater than zero trust.
This comment is bound to spark a lot of responses - it hits sensitive issues that we 'the masses' may have some worries about. It speaks to a populist conspiracy fear - that there is a "them" that wants to get rid of "us". That conspiracy is not actually proven.
Like, it's an 'emotional' argument but doesn't actually propose solutions. I'd like to hear more about solutions or actual pushback rather than simply pushing the emotion button over a conspiracy fear.
True, it is The Economist, which can be suspect in that it tends to reflect something of an agenda of a minority population with a lot of power. I am in fact willing to believe that there is a small group of people who would prefer there to be fewer people, period. But it still doesn't feel unreasonable to examine our morals. I'd prefer to build coherent arguments to oppose the concepts if these are bad concepts, not just push the emotion button a lot.
If a single family has to make decisions about having more babies or not based on budget, then why can't a planet full of people decide how many people should exist, period? Why is that "bad" to even think about?
And why not 'scan' an unborn child for diseases and terminate in some cases? I don't think it is implicitly 'de facto bad', as the sentence implies. This second comment attempts to tie larger issues of population to narrower, fear-based issues around abortion. It's creating a Gordian knot. It's not a great argument.
Is there a way to make an argument that isn't mashing the emotion button over and over? And that isn't trying to conflate far and near issues together into an unresolvable emotional morass?
I do favor a pro-choice stance, but I want to avoid strictly falling into a pro-choice versus pro-life framing, since that is hugely loaded politically, with people on the left hugely inflamed at the restriction of women's freedoms, and people on the right ostensibly outraged that every single life is not seen as sacred. As a left-leaning person I do think the people on the right are not being charitable, and I do think they don't actually care about lives, but rather are using babies as emotional tools to try to manipulate emotions and hold onto power. So I tend to think pro-lifers are manipulative - basically I see them as trying to manipulate me with their rhetoric. But I am willing to acknowledge that it is worth trying to think about this more; a charitable read has to include room for some pro-life arguments that human life is indeed sacred. The problem with the scanning statement, however, is that it leads us only to an emotional argument - and those are not solvable today. It might be nice to propose an idea or solution, not just (again) hit the emotion button.
If we have a mental model of a potential child as having the best possible outcomes, having the most joyous life, contributing the most - then of course it is an utter tragedy to deprive that child of a possible life. But I think our mental model of life should be more like what we actually see in nature: a garden that is riotous and constantly, eagerly grows, and that we weed and prune constantly. I'd argue that we are 'helpless gardeners' - we cannot avoid gardening, we cannot avoid stepping in the garden, we can only choose where to step.

The solution space I see here that bridges left and right values is to decide who the decision maker is around whether a child lives or dies. I'd argue the best decision maker is the person most closely involved and most entangled with that life - basically the mother. I'd pour dollars, funding, and energy into the people with uteruses rather than into judges and police. Any structural imperialism that deprives the mother of agency and turns her into a baby factory feels cruel to me, and it also clearly does not value the momma's life itself.

If people on the left or the right want to fund that mother, educate that mother, argue with that mother - then they should empower the uterus owner as much as they wish - pay for schooling, education, rhetoric, whatever they want to expose her to - but delegate the power of the decision to the uterus owner. People on the right then get a chance to bombard that poor uterus owner with their campaign - but so be it - at least the energy, money, and attention are placed in the right spot. This is more diverse, close to the ground, grass-roots, and reflects more the nature of the world - that decisions should be distributed and local - more ecological. In any world where uterus owners were male this would instantly be the case - we only consider denying women agency because we come from a patriarchy.
In fact, if we want to solve far-field issues like planetary population concerns, education is probably the best way in general. The system as a whole can load-balance births with locally available resources if it is allowed to do so, I believe.
It's nice to hear of a few other philosophers in this area aside from Peter Singer, the Effective Altruism community, and trolley problems.
These moral scales that philosophers employ do seem to rest on unquestioned axioms about the value of human time spent alive. I constantly wonder why, if they are philosophers, they don't examine the axioms themselves more. Often it feels like they try to weigh, say, the potential of a younger person to live longer against the weight of an older person with less time to live. I don't know that this kind of calculus really makes sense - it feels like it quickly leads to logical conundrums because it's extremely hard to precisely weigh one life against another.
If I had to decide between two lives (or, say, if the Earth were going to be struck by an asteroid), it feels like a better scheme is to weigh 'diversity' - to focus less on the happiness or joy or quality of life and more on trying to select for many different kinds of minds. So for example I'd try to save a diverse mixture of kinds of people from a sinking ship, or I'd try to select for people from, say, indigenous tribal cultures that were under-represented in outcomes. I'd probably bias mostly towards humans simply because I don't know of anything else that can cogitate, but I'd also try to include larger systems of living organisms. Selecting for diversity rather than quality of life feels like it avoids some of the tyranny-of-the-masses kind of thinking and the worst of human-centric thinking.
> because it's extremely hard to precisely weigh one life versus another.
I'd say it's impossible to do this in any ethical way without perfect knowledge of the future. When it comes to solving this as a civil issue, this is not at all what we do: we look at the future potential value of that individual person's life and then ask a jury to decide the award.
> So for example I'd try to save a diverse mixture of kinds of people from a sinking ship
Assuming you can actually measure diversity acceptably in such conditions.
> or I'd try to select for people from say indigenous tribal cultures that were under-represented in outcomes
Without asking why they're under-represented? Is a tribe of cannibals or a classroom of children more valuable?
> Selecting for diversity rather than quality of life feels like it avoids some of the tyranny of the masses kind of thinking and the worst of human-centric thinking.
You've only described racial diversity. What about diversity of lived experience? Diversity of thought? Diversity of religion? Where do these rank against race?
Authors making this argument (which we see often now) really need to sketch out why cryptographic solutions will fail.
The entire thesis rests on this erroneous sentence in the second point: “This can be done in two ways – just move to invite-only silos where you already know everybody, or big platforms where the owners do the vetting for you.”
There are more options. We employ them when we are forced to; when it becomes cheaper than not.
We can sign posts or sign that somebody is real or to sign that somebody has earned rep, or that somebody has burned their rep… and subjectively score every piece of content that crosses our phone against our social trust graph.
We can rhizomatically scale social networks, can deputize our people in our extended social network to mark content as appropriate for kids or not, or otherwise filter. There’s no specific reason why we cannot grow a kind of social nervous system that has a kind of myelin sheath against noise and spam. It doesn’t have to be specifically only people we’ve shaken hands with or our like 3 closest buddies.
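To make that concrete, here is a toy sketch in Python of what "sign a post and score it against a local trust store" could look like. It assumes the `cryptography` package, and the `TRUSTED_KEYS` store, the `score_post` rule, and all the names are made up for illustration - a sketch, not a design:

```python
# A toy sketch (not a design): sign a post with an Ed25519 key and only
# score it if the signature verifies against a key in a local trust store.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# Author side: generate a keypair once, sign every post with it.
author_key = Ed25519PrivateKey.generate()
post = b"here is my take on today's news"
signature = author_key.sign(post)

# Reader side: a local store of public keys we have personally verified.
TRUSTED_KEYS: dict[str, Ed25519PublicKey] = {
    "alice": author_key.public_key(),  # pretend we verified alice in person
}

def score_post(post: bytes, signature: bytes, key_id: str) -> float:
    """0.0 for unknown or forged emitters, 1.0 for a verified direct contact."""
    pub = TRUSTED_KEYS.get(key_id)
    if pub is None:
        return 0.0  # nobody we trust can attest this emitter exists
    try:
        pub.verify(signature, post)
        return 1.0  # signature checks out against a key we verified
    except InvalidSignature:
        return 0.0  # tampered content or spoofed identity

print(score_post(post, signature, "alice"))   # 1.0
print(score_post(post, signature, "nobody"))  # 0.0
```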
One of the biggest problems in actually using cryptography in the real world is matching keys to identities.
Using cryptography to solve the fake-persona problem only works if the key-identity matching problem is solved. It would be great if the fake-persona problem were the impetus that finally produced a solution to the key-identity matching problem. But I have my doubts.
Notably, the key-identity matching problem isn't technical. It's societal. "Just do government-provided keys" is technically easy, but society (rightfully so) is suspicious of this. Other solutions exist, with other trade-offs. SSL certificates have centralization, revocation, and weakest-link problems. PGP keys have spoofing, verification, and usability problems (though I liked Keybase's approach here). European e-ID is an interesting step, one to watch, though I fear the EU bureaucratic system might make a crucial misstep. I really like SSI-based approaches, but SSI is mostly about using crypto once the key-identity matching problem has been solved, and less about solving the actual problem.
Some technical aspects that need solutions, and whose solutions tend to be unacceptable, are handling key revocation, key theft, key loss (as in forgetting), and key duplication.
How about we let people generate their own keys, then use those keys to make identity claims, which people can generate on their own or which can be generated by a third party. That gives multiple options to bind identities to arbitrary social media accounts etc. without needing some monolithic root of trust.
I also really like the idea of using the keys to hold some amount of value, such that if the keys ever get leaked, there is basically a built-in bug bounty to alert the key holder (since the key thief has the option to take all the money). This also gives users an incentive to manage their keys in a sane way.
Social key management schemes are also super interesting, and will likely be a part of future key management schemes. That is, basically, allowing some set of friends and family to re-roll or revoke identities that have been lost or stolen.
Slowly but surely, I think this future is coming. Lots of good people are coming at it from different angles, but basically all converging on the same general concepts.
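For what it's worth, here is a minimal sketch of the "identity claims plus social recovery" combination in Python. Everything here - the claim format, the guardian set, the 2-of-3 threshold - is a hypothetical illustration, not a spec:

```python
# A minimal sketch, under loud assumptions: identity claims are just
# signed statements, and social recovery is an m-of-n vote by
# pre-designated guardian keys. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Claim:
    key_id: str      # the key the claim is about
    statement: str   # e.g. "controls twitter.com/alice"
    attestor: str    # who vouches: the key itself or a third party

@dataclass
class Identity:
    key_id: str
    guardians: set[str]  # friend/family keys allowed to re-roll this identity
    threshold: int       # how many guardians must agree

def can_rekey(identity: Identity, approvals: set[str]) -> bool:
    """A lost or stolen key can be replaced if enough guardians sign off."""
    return len(approvals & identity.guardians) >= identity.threshold

alice = Identity("alice-key-1", guardians={"bob", "carol", "dave"}, threshold=2)
print(can_rekey(alice, {"bob", "carol"}))    # True: 2 of 3 guardians agree
print(can_rekey(alice, {"bob", "mallory"}))  # False: only 1 real guardian
```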
> How about we let people generate their own keys, then use those keys to make identity claims, which people can generate on their own or which can be generated by a third party. That gives multiple options to bind identities to arbitrary social media accounts etc. without needing some monolithic root of trust.
You hit the nail on the head. Matching keys to real people can be done in person for direct friends, then through a web of trust for indirect friends. For accounts/keys/personas you find on the internet where you don't have a chain of friends, you can rely on "trusted" third-party attestation ("holder of key 0xdeadbeef earned a degree from this university"). You may never know with complete certainty whether that's a real human, a bot, or an alt account for someone you already know - and that's totally fine.
The "problem" of matching keys 1-to-1 with identities for everyone globally (brought up by grandparent post) is a massive red herring that doesn't need to be "solved".
The problem I brought up in the grandparent post wasn't (meant to be) about a 1-to-1 mapping. It was about asking "who owns this key".
Web of trust sounds nice, but it hasn't caught on. I would say that's because it has trouble going beyond one hop of trust if you actually consider adversaries. In-person physical confirmation works; re-keying is a major hassle though.
I do like the idea of a 'web of trust is good enough for declaring you are a person'. If you get a vouching system with a recursive revocation system, that might work well enough for establishing you are a person (though not well enough for establishing which person you are).
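A rough sketch of what I mean by vouching with recursive revocation, with made-up names throughout: someone counts as a verified person only while they are reachable from a trusted root through non-revoked vouchers, so revoking one voucher silently drops everyone who depended on them:

```python
# Sketch: a vouching graph where revocation cascades. Anyone whose only
# path to a trusted root ran through a revoked key loses verified status.
from collections import defaultdict

vouches: dict[str, set[str]] = defaultdict(set)  # voucher -> people they vouched for

def vouch(voucher: str, subject: str) -> None:
    vouches[voucher].add(subject)

def verified_people(roots: set[str], revoked: set[str]) -> set[str]:
    """Everyone reachable from a trusted root without passing a revoked key."""
    seen: set[str] = set()
    stack = [r for r in roots if r not in revoked]
    while stack:
        who = stack.pop()
        if who in seen:
            continue
        seen.add(who)
        stack.extend(s for s in vouches[who] if s not in revoked)
    return seen

vouch("me", "alice")
vouch("alice", "greg")
vouch("greg", "bot1")  # greg vouches for a bot farm
print(verified_people({"me"}, revoked=set()))     # {'me', 'alice', 'greg', 'bot1'}
print(verified_people({"me"}, revoked={"greg"}))  # bot1 drops out with greg
```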
Trusted third parties have problems. They centralize power, either in the state, or in some non-accountable organization.
Because using technology to solve human problems rarely/never works. [1] was originally written about spam and has proven mostly correct. Explain how replacing spam with "AI-generated spam" changes anything? You can try to fight this stuff, but it will look more like AI to detect AI (similar to our current anti-spam tech). There's no reason to believe cryptography has some kind of magic bullet here, as it's an unrelated problem domain. And the person claiming that getting kicked off prevents you from coming back ignores that a) we haven't solved tying disparate online personas to a unique offline one (despite Facebook ostensibly trying really hard), and b) all sorts of secondary problems pop up when you try to do that (e.g. it ignores the concepts of learning from your mistakes and redemption, key things that happen frequently with the young or anyone else testing boundaries).
> Because using technology to solve human problems rarely/never works.
You're badly misunderstanding the parent post - it is not proposing a technological solution to a human problem, but a technological enforcement of a fundamentally human solution:
> subjectively score every piece of content that crosses our phone against our social trust graph...can deputize our people in our extended social network to mark content as appropriate for kids or not, or otherwise filter
This is a social web of trust, where real people do the ranking and trust assignments - the cryptography and other technology just does the bookkeeping.
Given that GPT is already difficult to distinguish from a person who’s confidently wrong, how does this web of trust system solve the problem?
The belief that anything will “solve” this seems naive when there’s 20+ years of proof of this being an “unsolvable” problem despite repeated technological, social, and legislative attempts. There might be a new normal established, with new battlegrounds drawn, and we learn to “live” with it, but I’m willing to bet non-trivial sums of money against there being any true “solution” here.
Yes, the improving strength of GPT is magnifying the problem, but that is unrelated to the solution. The solution space steps outside of examining the content of the message for truth. The solution is to sign messages using a social trust graph, and to score posts based on social trust distance.
It's not a very compelling argument for me that a (very short) 20 year history of failing to solve this problem means that it cannot be solved.
I'm willing to bet you $1000.00 that this will be completely solved in 10 years. In 10 years we will know if content we consume is "fake" or "real". It may still be offensive, and harmful to gullible people - but it will all be scored as to how real it is. "Real" will be defined as the likelihood that the content comes from a real human-being type person, as agreed upon by other persons in between you and that person. Content one hop away from you, from a friend, will have a score of 100% real, and content many hops away will have a lower score. You will probably start your day by sorting the content you consume by the likelihood that it is real.
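To be concrete about the scoring I'm describing, here is a toy Python version. The graph, the decay factor, and the one-hop-equals-100% rule are all assumptions for illustration:

```python
# Toy hop-based trust scoring over a hypothetical "who directly trusts
# whom" graph; a direct friend scores 1.0 and each extra hop halves it.
from collections import deque

graph = {
    "me": ["alice", "bob"],
    "alice": ["carol"],
    "bob": ["carol", "dan"],
    "carol": ["erin"],
}

def trust_score(author: str, me: str = "me", decay: float = 0.5) -> float:
    """1.0 for a direct friend, halving per extra hop, 0.0 if unreachable."""
    seen = {me}
    queue = deque([(me, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == author:
            return decay ** max(hops - 1, 0)
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return 0.0  # nobody in my extended network can attest this author exists

print(trust_score("alice"))    # direct friend -> 1.0
print(trust_score("erin"))     # 3 hops away   -> 0.25
print(trust_score("spammer"))  # unreachable   -> 0.0
```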
I did actually work on PGP - and I'm willing to concede it didn't succeed. But DNS works and bitcoin works (technically). So we do use various flavors of cryptographic trust to make sure that actions in a network are "real" versus "forged".
And yes - it's true that bots can creep into a social trust graph... so yeah, it may take effort to keep pruning the weeds in the garden, in a sense.
Note that in some ways I'm not really trying to argue FOR crypto per se - I'm just saying that the OP should at least critique crypto if they want to make the thesis they are making. And I'm arguing that it is a big omission to gloss over the utility of crypto; the argument will be hard to make that crypto is not a significantly powerful modifier to the original thesis of the OP.
It's obvious to anyone with a passing familiarity with WoTs that you seed your web with people you know in real life. GPT is not "difficult to distinguish from a person who’s confidently wrong" in real life.
Are you going to be accepting direct confirmations only or indirect as well?
If you are only accepting direct confirmations, this means you are only going to talk to people who you meet in person. This is totally fine and will work, but then you don't need any new tech -- just ask for their email / phone / nickname on your favorite social site. Or make a private forum (or a Signal/Telegram/Whatsapp group) and invite them there.
If you are accepting indirect confirmation, then once the network grows big enough, there will be bots. Maybe some of your friends meet Greg, director of marketing for Widgets Inc., and correctly confirm him as a real human, and then Greg confirms an army of GPT telemarketer bots as "real humans" so they can do the sales and earn Greg a bonus. Or maybe your good friend gets malware on their computer and their key is used to confirm a bunch of keys without their knowledge.
It might be wise to point to something other than the Wikipedia page before claiming I don’t know anything about an entire topic.
You seem to be, intentionally or otherwise, completely missing what I’m saying. PGP just establishes ownership of a private key. It doesn’t say anything about that person then choosing to sign the output of GPT, or giving GPT that private key to do whatever with. And GPT can mimic whatever writing style you give it - it's not hard to imagine giving it various writing samples of yours to learn from and imitate. So please explain how a web of trust solves anything there, aside from trying to keep track of which person in your personal network is a spam vector - because there are super-connectors with thousands or tens of thousands of real-world contacts, and logistically that's not realistic to manage, since most people aren’t cryptography nerds.
There’s also a bigger, fundamental problem with applying a web of trust here. Trust is not transitive: if I trust person A and they trust person B, in reality there’s nothing we can say about my trust in B. Trust also isn’t binary: if I trust person A about specific science topics, that doesn’t mean the trust extends to other topics. And trust isn’t static and sticky, whereas it is generally treated as such in computer systems, where trust needs to be scored and revoked automatically somehow (and then we’re back to an AI war: AI to detect abuse versus better GPT to evade detection). This is also ignoring that trying to model human trust webs with a CS model that works nothing like them isn’t a good recipe for success. Also, human trust webs have massive trust and scaling problems of their own (cough Theranos, WeWork, FTX, Madoff, etc. etc.). Notice how web of trust always sticks to basic cryptographic primitives, which are easy to write papers about and solve academically, but it is not a solved problem by any means in terms of defining what trust actually means or how a web of trust works in terms of AI content. Obviously PGP has been around forever and AI is a bit newer, so maybe there will be interesting work coming out of this space at some point. AFAIK, today web of trust buys you bupkis in terms of fighting GPT spam. I would recommend reading any number of articles that discuss why PGP and signing parties failed. It’s not purely a UX issue. The bigger problem is that even in a “trusted” system, fraud arises spontaneously because it’s a prisoner’s-dilemma problem: there’s a material advantage to perpetrating fraud, and an even better one to helping perpetrate fraud, while revoking that trust is a much harder and longer political process.
As a more concrete, practical demonstration of this failure, consider certificate authorities, which are assumed to be “trusted core signatories” in a PKI system. A PKI system is actually the same as a “web of trust”; it’s just that I delegate verification to a third party. As cryptocurrencies should have shown, people still prefer having a virtual account in a bank to manage those funds / lower fees. Similarly, people will outsource the complexity of validating identities (CAs signing certs for websites). CAs have repeatedly abused this, to the point where AFAIK the security community generally acknowledges that CAs are generally worthless - even the “good ones” struggle to do verification at scale, and there are so many CAs in typical lists that it’s basically guaranteed there are malicious actors. And we know decentralization doesn’t actually work for end users, because it’s too complicated a mental model. People want a named intermediary they can delegate responsibility to. That’s why most people defer their CA validation to browsers and operating systems. PGP would work similarly, so now you’ve got people delegating key trust to Apple, Microsoft, Google, Signal, etc. etc., and nerds who use open-source verified key managers and maintain their own infra to manage these lists. But that’s not a representative sample of what end users will accept at scale. So you’re back to centralized control, which will be better than the status quo, as OSes and browsers realistically are more resistant to handing out broken signatures. And of course maybe better algorithms and methods will be developed to solve these shortcomings.
But a lot of these issues have existed and been documented for a long, long time, independent of the vague idea of using this as a GPT detector/blocker. I’ve been around the Bay Area for 10+ years, and I remember having friends who thought this would take off any day now and worked really hard to make it happen, hosting signing parties and whatnot. It didn’t, and I was pretty confident it was a pure nerd activity that wouldn’t have any impact in its current form (regardless of the UX challenges - the problems are much more fundamental and worse). Web of trust is seriously hard even in its simplest possible form, which is PGP, and that has failed miserably despite being around for a very long time.
Would you agree that it’s on you to provide some supporting evidence here? Namely:
A) that I don’t know what I’m talking about, and
B) something more than hand-waving “sprinkle some decentralized cryptography here” - an actual explanation of how you solve the human problems that are so important here, and of why PGP has largely failed but is suddenly going to find a second life in GPT prevention.
The same problem we have with other networks of trust: Signing works alright if you already know the people you expect messages from (but in that case, it's also no better than the invite-only social networks the OP talked about).
However, the real problem is getting to know new people or vetting messages from people you don't know. In the future the OP sketched, you can never be sure that an interesting new person isn't actually a bot. Knowing the public key of that person won't solve that problem.
Indeed. SPF and DKIM were supposed to reduce spam by ensuring that every sender was verified. Now, we have more spam than ever, and all the senders are verified (on short-lived garbage domains that are not yet on blocklists). The only DKIM failures I ever see are on legitimate mail from badly set-up lists.
It is gonna happen for sure. People will leverage powerful tools and claim it is their own voice. Me shrugs.
I more want to at least have that individual emitter be accountable for what they post; to establish continuity. I want to know that that emitter is, say, 3 hops from me, is trusted by 12 friends in between, and has a general trust score of, say, 7/10 as an overall rating by my extended trust graph.
It is less that I want people to say good things, or be truthful or whatnot - I just want to know that they are real, that there is a human behind it, that that human has an opinion of some kind. The thing is that there are a ton of sock puppets that are not real - it's more about reducing the noise/spam than about a perfect solution.
You can ban them and they'll stay banned. Today, they'll come back or switch to one of the other dozen accounts they post crap on. The counter-argument to that is that places like Facebook are mostly not pseudonymous, and a lot of crap is posted there.
There are going to be a few cases where keys are lost or stolen. It may be possible to build multi-sig wallets that allow you to migrate an identity to a new key. What we're looking for is some kind of statistical means of reducing bad actors. Even if there were no recourse for you and you were totally screwed, I'm not sure that totally invalidates the concept of having a key, or a mechanism for trying to remove bad actors from social discourse. You could trip and die because you wore shoes. You could get locked out of your house... bad stuff does happen - it doesn't mean we shouldn't wear shoes or have keys.
AI generated content will likely be higher quality than what we have today. AI accounts will spend time building reputation with superior content and then burn some on an ad or spam campaign, but they’ll still have a better reputation than most people.
Even if the whole idea of cryptocurrencies is found to be a dead-end, the fact that it invigorated the research and development of SSI will change some important things about how we operate online.
There exist prototypes of tech that allows you to prove you are indeed a unique human being online [1], and reveal nothing else about your identity. Most importantly, this tech is not owned or controlled by any FAANG or government, it's an open protocol just like email.
I listened to a podcast with an expert AI researcher, and I remember him saying that he predicts some form of cryptographic identity will arise to help deal with the bot problem.
[1] https://worldcoin.org/ (note, I don't work for them, and have no idea if this will be the tech that finally breaks out, I just think they're the furthest along of any of their competitors)
You can employ those techniques, but real people will get blocked as spam. And the better AI gets at evading, the closer to the bone you will have to cut. Then what? AI is interacting with your algorithm to silence voices.
Even with some defects or imperfections, anything is better than what we have now - which is basically nothing.
I think the way I'd think about this is to imagine say a small community, such as a town of say 5000 people or so. While you cannot know each person individually, you can know of people by reputation. People do earn rep over time, and they can burn rep. It is true that some people will be unfairly downscored, or unfairly upscored - but I'm not really trying to argue for those fine grained situations. What I'm trying to argue for is simply distinguishing the very bad actors acting out of pure malice from injecting fake news, media and 'yellow journalism' into human conversations.
True, some real people will be downscored (I prefer to think of this as downscoring bad actors rather than 'blocking' them). And true, an AI can 'sound very human' - but an AI or a bad actor will struggle to build up a reputation over time. An AI can't shake hands with you; it is harder for it to prove it is human... Other bad actors will presumably burn their reputations if they spit out a series of offensive, misleading, false, inflammatory, or toxic posts...
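Roughly what I have in mind for "slow to earn, fast to burn" reputation, as a toy sketch - the rates here are made up for illustration:

```python
# Toy sketch: reputation accrues slowly per good-faith post and burns
# fast per flagged one, so a sock puppet cannot cheaply rebuild what
# it torches.
def update_reputation(rep: float, flagged: bool,
                      earn: float = 0.01, burn: float = 0.25) -> float:
    """Slow to earn, fast to burn; clamped to [0, 1]."""
    rep = rep - burn if flagged else rep + earn
    return min(max(rep, 0.0), 1.0)

rep = 0.0
for _ in range(100):  # a long run of good-faith posts
    rep = update_reputation(rep, flagged=False)
print(rep)            # 1.0 after sustained good behavior
for _ in range(4):    # a short burst of spam
    rep = update_reputation(rep, flagged=True)
print(rep)            # 0.0: the reputation is burned
```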
Note I am not necessarily advocating for crypto per se as a way to establish social trust graphs (à la PGP or, say, Keybase), but I am arguing that there are other options the OP did not raise. I more want to see a wider discussion around ways to filter malicious media beyond either "centralized systems" or "small social clubs". I'm not necessarily saying it has to be a cryptographic solution... but I do think there are more ways to get what we want.
This essay opens with comments about utility for the metaverse but then switches to a description of a grammar without strongly connecting back to why it is so valuable for programmers to learn a new language to better build the metaverse. It may be helpful, but is it that much more helpful?
I understand this is a pet project for these folks, but I think that a metaverse grammar should not be functional but procedural - it should be extremely simple and accessible to novices. We want these tools to be accessible to everybody, I think - unless we as programmers want to be responsible for building all interactions for all users.
But I mostly think a grammar today isn’t just about the literal parser - instead it is about the parser and the surrounding tooling. Rust, for example, is surrounded by tooling that helps - Cargo and crates are a nice, helpful way to make Rust developers effective.
If I were going to devote significant energy to this topic, I’d solve other problems. Any code that users write needs to be late-binding - allowing software agents to be pushed to the cloud and to participate in already-running simulations or models. Code should allow users to define granular security around libraries and components to prevent them from doing bad things. Code should be highly portable, able to run across many devices at speed - not something new that has low cross-platform support.
I think a better way of thinking about this isn’t the grammar itself but the kind of “computational sandbox” one is offering - the grammar container or app runner.
WASM is a good example of thinking in a better way - portable, secure, performant. It’s more like a metaverse tool than the above grammar.
What’s especially curious to me is that Tim’s project, Unreal, is almost the anti-metaverse. It is utterly fixated on and built around an extremely heavy, high-fidelity renderer that is not portable and does not run across many devices. Unreal uses an old-school compilation philosophy that requires behaviors to be precompiled - so if you want to add a single feature you have to tear down all the instances, recompile them, recompile the server too, distribute a new build to all participants, and restart the sim. It’s a tool designed for a different era and a different ecosystem… in a sense, AAA games are the opposite of a participatory, constantly evolving, online, shared consensual world. So maybe he should fix that first.
> It is utterly fixated on and built around an extremely heavy high fidelity renderer
This is doing Unreal Engine a significant disservice. Unreal is significantly more than a high-fidelity renderer; Unreal's replication system is excellent, as is the gameplay ability system. (To my knowledge, neither of those systems exists in any of the other major engines that are readily available.) There's a laundry list of things it does pretty well; distilling it down to a renderer only is very dismissive.
> that is not portable - that does not run across many devices.
Renderers by definition aren't going to be portable; they're pretty intrinsically tied to the hardware (and OS) they're running on. Also, it's silly to say it doesn't run on many devices - it runs on every games console from the last decade, on an enormous number of mobile devices on both Android and iOS, and on all major desktop platforms natively. What more do you want?
> Unreal uses an old school compilation philosophy... So maybe he should fix that first.
Unreal definitely has some dated design decisions that are showing their age, but that's to be expected for a codebase that's 25 years old. And if you've been paying attention to what Epic has been doing over the past few years, you would see they are working on that.
(Disclaimer: I worked for Epic until recently on exactly the things you're talking about in this comment.)
The minute fakes are a serious issue, we will sign utterances - exactly the same as DNS or anything else actually important. Deepfakes will not be a threat or an issue.
Humans more than any other life form seem to constantly be playing with social, economic and political self-organization. How energy is distributed and what is fair. We’re a highly social almost hive-like species that is constantly trying out new hive patterns and arguing about them.
Crypto is useful because it lets people play with their own schemes and nobody can stop them. It seems like most uses of crypto are for some form of token-gated community.
While token gates are a valuable concept, it is true that many communities themselves are not worth joining and are inflating their perceived value.
I do think that the recent spate of critiques is fairly poor as well. Critiquing antiques like Bitcoin, or lolling about, say, early DAOs falling over due to bugs, doesn’t really make a compelling argument.