The article pretty much sums up why I've been a bigger fan of OData than GraphQL, especially for business use cases. OData still gets you the same wins that GraphQL does, but without a SQL-ish query syntax and while sticking to the REST roots the web works best with. It also helps that lots of Microsoft services work with OData out of the box.
- Overly verbose endpoint & request syntax: $expand, parentheses and quotes in paths, actions, etc.
- Exposes too much filtering control by default, letting consumers do "bad things" on unindexed fields instead of steering them towards the happy path.
- Bad/lacking open source tooling for portals, mocks, examples, and validation compared to OpenAPI & GraphQL.
It all smells like unpolished MS enterprise crap with only internal MS & SAP adoption, TBH.
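To make the verbosity complaint concrete, here's a small Python sketch of assembling an OData v4 query URL against a hypothetical service root (the endpoint and entity names are made up for illustration). Note the nested parentheses inside $expand and the quoted literals inside $filter:

```python
from urllib.parse import urlencode

base = "https://example.com/odata/Orders"  # hypothetical service root

# Typical OData v4 query options: $expand pulls in related entities
# (with its own nested options in parentheses), $filter and $select
# trim the payload.
params = {
    "$expand": "Customer($select=Name),Items($filter=Quantity gt 1)",
    "$filter": "Status eq 'Open'",
    "$select": "Id,Total",
}
url = base + "?" + urlencode(params)
print(url)  # prints the assembled (and heavily percent-encoded) URL
```

Even this simple three-option query ends up with nested parentheses, quoted strings, and space-separated operators inside a single query-string value, which is the kind of thing the parent is complaining about.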
Does that setup work OK outside in windy environments? A lot of the 'active' audio systems I've found really suffer from wind noise, while hearing aids will try to filter that out.
You definitely have a point there. The 3DME uses small MEMS microphones embedded in the faceplate of the earphone, and it has no physical windscreen. It also lacks the wind suppression of digital hearing aids and consumer buds, which use aggressive software algorithms to detect wind and instantly cut the low frequencies to stop the rumble.
As a workaround, some artists performing outdoors wear a thin, acoustically transparent beanie or headband over the ears, which effectively acts as a pop filter/windscreen: it breaks the wind before it hits the mic while still letting mid-to-high frequency sound (speech/music) pass through. Not exactly a hearing aid alternative, though.
I've been writing the Teams plugin for Pidgin for the past few years, and it's great to be able to give users the kind of features or privacy options that you'd never get from official clients, while learning lots of fun ways of (ab)using web tech to solve problems.
Like how Teams does a "reverse webhook" over a websocket to push presence notifications to a web client acting as a server.
One of the biggest contributors to SQLite blocking I've seen in the past was disk fragmentation.
We had some old Android tablets running our app 8 hours a day for 3-4 years. The users would complain of locking errors and slowness, but every time they copied their data and sent it to us, we couldn't replicate the problem, even on the same hardware. It wasn't until we bought one user a new device and got them to send us the old one that we could check it out. We thought maybe the flash storage had worn out over the years of continual use, but installing a dev copy of our app was super fast. In the end, what did work was to "defrag" the DB file by copying it to a new location, deleting the original, then moving the copy back under the same name. Boom: no more "unable to open database" errors, no more slowdowns.
I tried this on Jellyfin DBs a few months ago, after running it for years and then suddenly hitting performance issues, and it made a big difference there too.
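For anyone wanting to try the same trick, here's a minimal Python sketch of that copy-delete-rename dance (filenames are throwaway; only do this while nothing has the database open, and make sure any -wal/-shm sidecar files are gone after a clean close):

```python
import os
import shutil
import sqlite3
import tempfile

def defrag_db(path: str) -> None:
    """'Defrag' a SQLite DB file by rewriting it: copy to a new
    location, delete the original, then move the copy back under
    the same name. The copy gets freshly allocated (and typically
    more contiguous) filesystem blocks."""
    tmp = path + ".defrag"
    shutil.copyfile(path, tmp)  # new file -> new FS blocks
    os.remove(path)
    os.replace(tmp, path)       # rename the copy back to the old name

# quick sanity check on a throwaway database
dbpath = os.path.join(tempfile.mkdtemp(), "app.db")
con = sqlite3.connect(dbpath)
con.execute("CREATE TABLE kv (k TEXT, v TEXT)")
con.executemany("INSERT INTO kv VALUES (?, ?)", [("a", "1"), ("b", "2")])
con.commit()
con.close()

defrag_db(dbpath)

con = sqlite3.connect(dbpath)
print(con.execute("PRAGMA integrity_check").fetchone()[0])  # "ok" if intact
```

The data and page layout inside the file are untouched; only where the file's blocks land on disk changes, which is exactly what the fragmentation theory predicts should matter.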
Copying the file likely forces the creation of a new one with no or lower filesystem fragmentation (e.g. a 1MB file probably gets assigned to 1MB of consecutive FS blocks). Then those FS blocks likely get assigned to flash dies in a way that makes sense (i.e. the FS blocks are evenly distributed across flash dies). This can improve I/O perf by some constant factor. See https://www.usenix.org/system/files/fast24-jun.pdf for instance for more explanation.
I would say that the much more common degradation is caused by write amplification due to a nearly full flash drive (or a flash drive that appears nearly full to the FTL because the system doesn't implement some TRIM-like mechanism to tell the FTL about free blocks). This generally leads to systemwide slowdown though rather than slowdown accessing just one particular file.
This was especially prevalent on some older Android devices that didn't bother to implement TRIM or an equivalent feature (it even affected Google's own devices, like the Nexus 7).
> That's much more likely flash degradation than actual fragmentation. Did you use cheap tablets with eMMC storage?
My understanding of the parent reply's situation is that this was happening on the tablets of their users, so it kinda doesn't matter that it can be avoided by not using cheap tablets.
Most apps aren't in a position to tell their users they're on their own when they run into what feels like an unreasonable app slowdown because they didn't buy a good enough device to run it on, especially when they've previously experienced it running just fine.
If all their apps feel like crap on that tablet, sure, that might fly... but if it's only your app (or only a small set of apps that use SQLite the same way the OP's company did) that feels like crap after a while, then it's effectively a you problem (to solve) even if it's not really a you problem.
In any case, it's an interesting data point and could be very useful information to others who run into similar issues.
I don't quite understand what you're arguing here.
I'm merely saying that the root cause was misidentified: the performance degradation didn't happen due to fragmentation, but because the flash storage had degraded to the point where write performance dropped significantly. This happens faster with eMMC than with SSD-style storage.
Copying the DB file moved the data to different storage blocks which is why it (temporarily again) improved performance.
Yeah, copying the files most likely moved them to a different part of the storage, which is why performance improved again.
Sadly that's a common plague for cheaper Android hardware - after enough writes, the flash performance drops off a cliff, making those devices essentially unusable :/
(More expensive hardware - including Apple's - tends to have UFS-type storage, which lasts longer.)
We had that thought too. I'll have to try to dig out what the tablets were to find out exactly what type of storage they had - this would have been 3 or 4 years ago now. We sort of ruled that out because:
- The other workaround to get a speed boost was for the user to uninstall and reinstall the app (and then wait for all the data to download again), but that didn't fly because users would delete before they'd synced off all their data, and then data would go missing.
- This was all despite having VACUUM run whenever the app started.
Whether it was bad flash or not, we still had to try to resolve it, as the client wouldn't buy new hardware until we could prove we had the knowledge to make the problem go away first :/
Yeah, I don't think there's much you can do from the software side there - those kinds of Android devices just end up unusable due to I/O performance degradation, and it's hard to keep them running.
Ext4 is used on Android, and it isn't immune to fragmentation.
The way ext4 reduces fragmentation is with some basic heuristics: mainly, it spreads files across the full disk instead of putting each one in the next free spot, so they have room to grow without fragmenting. When space gets low, it fragments just as badly as older file systems, unfortunately.
(Edit: if multiple processes are concurrently reading and writing and one process vacuums, verify that the right things happen - specifically, that writes made by other processes during the vacuum don't get erased when the vacuum finishes. You may need an external advisory lock to avoid data loss.)
> You can VACUUM INTO, but standard vacuum won’t rewrite the whole db.
This is not true. From the link you posted:
> The VACUUM command works by copying the contents of the database into a temporary database file and then overwriting the original with the contents of the temporary file.
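To make the distinction concrete, here's a small Python sqlite3 sketch of both forms (the paths and table are throwaway; VACUUM INTO needs SQLite 3.27+):

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "main.db")
con = sqlite3.connect(db)
con.execute("CREATE TABLE t (x)")
con.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
con.execute("DELETE FROM t WHERE x % 2 = 0")  # leave free pages behind
con.commit()

# Standard VACUUM: rebuilds the database into a temporary file, then
# overwrites the original with it - the whole DB file gets rewritten.
con.execute("VACUUM")

# VACUUM INTO: writes the compacted copy to a *new* file and leaves
# the original untouched - handy for backups, or for the
# copy-and-rename defrag trick from elsewhere in this thread.
copy = db + ".compact"
con.execute("VACUUM INTO ?", (copy,))
con.close()

print(os.path.exists(copy))
```

So both variants rewrite the full database; the difference is only whether the result replaces the original file or lands somewhere new.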
Can confirm, as a Teams and Skype protocol reverse engineer for Pidgin, that most of the Teams protocol, including text messages, started as Skype (not the decentralised one) and has had additional layers of stuff added on top over the years. Calling in both Skype and Teams still uses a websocket with a "reverse webhook" called Trouter, which lets the client respond to events as if it were a webserver responding to webhooks, and then does a handoff to WebRTC. When I first started writing the Teams protocol plugin for libpurple, it was easier to start from the Skype plugin than from scratch.
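The "reverse webhook" shape is roughly: the server pushes what is in effect an HTTP request down the websocket, and the client answers with an HTTP-style response on the same connection. A toy Python sketch of that request/response envelope - every field name here ("id", "url", "status", etc.) is invented for illustration, not the actual Trouter wire format:

```python
import json

def handle_trouter_event(frame: str) -> str:
    """Treat an incoming websocket frame as an inbound HTTP request
    and produce the matching 'response' frame, the way a webserver
    would answer a webhook. Field names are made up for this sketch."""
    request = json.loads(frame)
    event = json.loads(request["body"])  # e.g. a presence change
    # ... react to the event here (update buddy status, etc.) ...
    response = {
        "id": request["id"],  # correlate response with request
        "status": 200,        # "webhook handled"
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"handled": event["type"]}),
    }
    return json.dumps(response)

# simulate one pushed event arriving over the websocket
incoming = json.dumps({
    "id": 1,
    "url": "/v4/f/abc123/presence",
    "body": json.dumps({"type": "presenceChanged", "user": "alice"}),
})
print(handle_trouter_event(incoming))
```

The neat part is the role reversal: the client never opens a listening port, yet the server can still "call" it like a webhook endpoint because the long-lived websocket gives it a channel to deliver requests on.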
For a multi-page app, one of the important uses of service workers is pre-loading and caching resources that aren't on the first page. E.g. you might have a background image that's only displayed three pages deep, but you can download and cache it as soon as the site is opened.
You can potentially use HTTP/2 push to send files down from the server early, but I've seen the browser drop them if they're not needed for that page load.
Yes, there are other hacks you could use to make the browser download resources early, like invisible images or preloaded audio files, but if you're going down that path, why not put in a service worker instead and have it download in the background?