This is a key point: HTTP treats URLs as opaque identifiers, and understanding that is important. Trying to leverage URL conventions is just asking for trouble. SEO complicates this a bit for crawled sites, but for web services it is a very different situation.
Beautiful URLs are convenient though. Not every API call is going to follow a succession of links from the main entry point. Do you really design opaque URLs on purpose? Does it work well for your clients? This is a genuine question: I'm quite seduced by the hypermedia story, but still wondering how it works in practice. It seems that for the most part, we don't really know yet.
I think REST and "nice" URLs are orthogonal. HATEOAS definitely doesn't require you to assemble URLs, but human-readable URLs make life a bit easier as a developer, so why not have them?
I think "beautiful" is the wrong word. It should be logical and should try to convey what it does. I don't care what a URL looks like, as long as I can understand what it's supposed to do.
APIs are for developers, not for search engines and definitely not for the end user.
Performance and simplicity of client code come to mind. If the API is consumed by a UI, it sounds good, particularly if the API provides hypermedia controls that can help generate the UI. But for automated consumption, issuing one call to a known endpoint still seems so much more straightforward than walking down the link graph until the expected link relation is found. I guess I just have to go and try it myself, because at this point it seems we don't have much documented experience to rely on.
Yes, but you shouldn't hardcode the known endpoint; you should cache it after a walk. At some point in the future, it may 404 or 410 on you, in which case you rewalk the API from the entry point, following the link relations that got you to the thing you are after. Then you cache it again.