Is there really that much safely web-exposable data in SQLite for this to make sense? I'm not really seeing how this is obviously better than the metadata ideas that preceded it.
Some: weather, ratings, topography, dictionaries and encyclopedias, sports scores, market prices, some other stuff. All public knowledge, but not necessarily publicly available (easily) in raw form.
I get that, but his argument is that you could do it with no additional work.
"The beauty of this technique is that you are already using SQLite because it's such a powerful database; with no additional work, you can throw it on a static file server and others can easily query it over HTTP."
I doubt very many of those already use SQLite. As soon as additional work is needed, it's probably easier to just expose it as XML or JSON.
I do not understand how that would work. The clients don't have any way to synchronize with server changes, so they can read data in an inconsistent state.
I think the way you'd have to do it is to effectively publish new database versions to their own paths, symlinked as much as possible so that compatible versions share the same backing database. You'd do something like this:
http://host/db/1.1.0 is what you create when you add a new column. It's backwards compatible with /1.0.*, so you can either leave those paths working or you can redirect them to /1.1.0, depending on what guarantees you want to be making to your clients.
http://host/db/2.0.0 is the version with an old column deleted, but you'd want to check in your access logs that nobody was still requesting any 1.0 version before publishing it. Either way, when this gets published you want to stop serving the /1.0.* path because 1.0 and 2.0 now can't come from the same backing file. But 1.1 and 2.0 can come from the same file if you've given everyone time to stop using the deleted column.
It's not a great scheme, but it does give you a way to get new client connections onto the right version.
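To make the scheme above concrete, here's a minimal sketch of the publishing side. Everything here is hypothetical (the directory layout, the `publish` helper, the snapshot filenames); the only real idea is that each versioned path is a symlink, and compatible versions point at the same backing file. Using a temp directory as a stand-in for the static-file root:

```python
import os
import tempfile

# Stand-in for the real static-file root (e.g. what /db/ maps to on the server).
db_root = tempfile.mkdtemp()

def publish(version, backing_file):
    """Point <db_root>/<version> at a backing snapshot file, atomically."""
    link = os.path.join(db_root, version)
    tmp = link + ".tmp"
    os.symlink(backing_file, tmp)  # create the new link off to the side...
    os.replace(tmp, link)          # ...then swap it into place atomically

# 1.1.0 only adds a column, so the 1.0.* paths and 1.1.0 can share one
# backing file:
publish("1.1.0", "snapshot-b.sqlite")
publish("1.0.2", "snapshot-b.sqlite")

# 2.0.0 drops the old column. Once the access logs show no more 1.0.*
# traffic, 1.1 and 2.0 can share a file, but 1.0.* no longer can:
publish("2.0.0", "snapshot-c.sqlite")
publish("1.1.0", "snapshot-c.sqlite")
```

The atomic `os.replace` matters here: a client mid-download should see either the old snapshot or the new one at a given path, never a half-updated link.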
For clients that have a session which lives across database upgrades, I think what you'd need is a `schema_version` table the client could check however often makes sense, and let themselves get reset if they find there's a new version available.
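A rough sketch of that client-side check, simulated with an in-memory database standing in for the served file. The `schema_version` table name and the `Client` wrapper are assumptions from the suggestion above, not anything SQLite provides:

```python
import sqlite3

# Simulate the published database; a real client would be range-reading
# or re-downloading the served file instead.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE schema_version (version TEXT NOT NULL)")
db.execute("INSERT INTO schema_version VALUES ('1.1.0')")

class Client:
    def __init__(self, conn):
        self.conn = conn
        # Remember the schema version this session started against.
        self.version = self._server_version()

    def _server_version(self):
        row = self.conn.execute("SELECT version FROM schema_version").fetchone()
        return row[0]

    def query(self, sql):
        # On whatever cadence makes sense (here: every query), re-check the
        # version and let the session get reset if the server has moved on.
        if self._server_version() != self.version:
            raise RuntimeError("schema version changed; reset and reconnect")
        return self.conn.execute(sql).fetchall()

client = Client(db)
client.query("SELECT 1")  # versions still match, query proceeds

# The server publishes an upgrade; the client's next query triggers a reset.
db.execute("UPDATE schema_version SET version = '2.0.0'")
```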
What do you mean "works fine"? AFAIK, to read consistent data you need to be reading from a database snapshot. So with every data update on the server you need to make a new snapshot (to include the new data) and publish it under a new version; otherwise clients might be in the middle of some query against the old snapshot and combine metadata from the old snapshot with data from the new one.
One of us does not understand how that works. As I understand it, you need to acquire read/write locks to ensure consistency in the presence of a writer. You can't do that by reading ranges of a static file with independent read requests.