Live in Sydney and have had the cockatoos in our bins a few times in the past six months. They're pretty brazen too; I caught them at it a while ago and they carried on like I wasn't there.
Started leaving a brick on the lid which seems to work.
I first used it for firmware design at MSFT a few years ago, and I've used it since in the area of protocol design. I don't use it frequently, but it's a tool I'm glad I took the time to learn, and glad to have to hand when I need it.
My experience is that using TLA+, or other tools like it, encourages you to think precisely about the problem while excluding unnecessary details. That alone I find useful for the insights it gives, e.g. surfacing interesting invariants, or alternatively invalidating them. You can then use the model checker (TLC) to test these assumptions quite easily, and bring the invariants into your code, whether as assertions, contracts, or property-based tests. Either way, I would say the process of writing such a specification yields benefits. Perhaps this is best summed up by something I think Leslie Lamport once said: "If you're thinking without writing, you only think you're thinking."
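To make the "bring invariants into your code" point concrete, here's a minimal sketch (my own toy example, not from any real spec): a bounded queue whose spec-level invariant is carried into the implementation as an assertion, checked along a random walk in the style of property-based testing.

```python
import random

class BoundedQueue:
    """Toy state machine: a bounded FIFO queue."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []

    def put(self, x):
        # Guard mirrors the spec's enabling condition: put is a no-op when full.
        if len(self.items) < self.capacity:
            self.items.append(x)

    def get(self):
        return self.items.pop(0) if self.items else None

    def invariant(self):
        # Invariant lifted straight from the spec:
        # the queue length never exceeds capacity.
        return 0 <= len(self.items) <= self.capacity

# Crude property-based check: a random walk over the state space,
# asserting the invariant after every step.
q = BoundedQueue(capacity=3)
random.seed(0)
for step in range(1000):
    if random.random() < 0.5:
        q.put(step)
    else:
        q.get()
    assert q.invariant(), f"invariant violated at step {step}"
print("invariant held for 1000 random steps")
```

Libraries like Hypothesis do this far more systematically, but even this hand-rolled version catches the obvious regressions.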
I should add that despite claims about TLA+ being particularly well-suited to concurrency problems, my implementation targets have never been 'concurrent' in the 'multi-threaded' sense. Rather, they have been single-threaded event-driven state machines, and I personally find that it excels in this space. However, as you will find, in TLA+, concurrency is just a matter of abstraction...
I have found that, because TLA+ doesn't 'generate code', many colleagues struggle to see the tangible benefit. I personally think that there is a gap between the whiteboard and the editor which tools like TLA+ fill very nicely. It's not a panacea or a magic bullet, and like any tool you must still exercise judgement about when to use it, and how to use it effectively, which includes understanding the limitations of the tooling (TLC cannot check arbitrary specifications).
But if I find myself faced with a potentially tricky algorithm, or indeed want to understand an existing algorithm, I reach for TLA+.
Nicely put. In my nearly three decades of software development, the most successful of my various colleagues have been the ones motivated by understanding the problem and its wider context, rather than by the tech/process/language _du jour_.
This is a multiplier because it allows them to detect and ignore non-problems and work on problems whose solutions _will_ deliver value.
VCC[1] was a substantial piece of research from MSR aimed at supporting the verification of C (not C++) via a theorem prover. However, it appears from a study[2] of its application to parts of the Hyper-V code-base that there were some practical difficulties relating to prover performance in the developer workflow.
Still, fascinating to see work like this on real-world code-bases with established languages. You can also see very similar work in languages like Dafny[3].
Adding verification once you have code is generally more difficult, though in case of VCC it was helpful to have a large example code base to make sure the tool would cover the major patterns, especially relating to concurrency primitives.
I learnt and used TLA+ while working at MSFT, and I'm still using it today:
- anytime I have some protocol or state machine whose behaviour isn't obvious, I write a TLA+ spec.
- the process of writing it clarifies my understanding and leads to new insights. These insights directly inform the code and the tests.
- the model checker makes it very easy to check sophisticated properties of the algorithm.
I don't use it all the time, but it's a tool I'm very happy I invested in.
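To give a flavour of what the model checker is doing (this is a toy sketch of my own in Python, not TLC itself, and it omits fairness, liveness, symmetry reduction and everything else): breadth-first search over every reachable state, failing fast if an invariant ever breaks. The example models the classic lost-update race between two unsynchronised increments.

```python
from collections import deque

def check(init_states, next_states, invariant):
    """Toy explicit-state model checker: BFS over all reachable states,
    returning a counterexample state if the invariant ever fails."""
    seen = set()
    frontier = deque(init_states)
    while frontier:
        s = frontier.popleft()
        if s in seen:
            continue
        seen.add(s)
        if not invariant(s):
            return s  # counterexample
        frontier.extend(next_states(s))
    return None  # invariant holds in every reachable state

# Example: two processes each do x := x + 1 via a read-then-write.
# State = (pc1, pc2, tmp1, tmp2, x); pc 0 = about to read, 1 = about
# to write, 2 = done.
def next_states(s):
    pc1, pc2, t1, t2, x = s
    out = []
    if pc1 == 0: out.append((1, pc2, x, t2, x))        # p1 reads x
    if pc1 == 1: out.append((2, pc2, t1, t2, t1 + 1))  # p1 writes tmp+1
    if pc2 == 0: out.append((pc1, 1, t1, x, x))        # p2 reads x
    if pc2 == 1: out.append((pc1, 2, t1, t2, t2 + 1))  # p2 writes tmp+1
    return out

# Invariant: once both processes finish, x should equal 2.
# The checker finds the lost-update interleaving where x == 1.
bad = check([(0, 0, 0, 0, 0)], next_states,
            lambda s: not (s[0] == 2 and s[1] == 2) or s[4] == 2)
print(bad)  # a final state with x == 1
```

In TLA+ you'd write the same thing declaratively and let TLC do the search, with proper counterexample traces; the point here is just that "check this property over every reachable state" is mechanically simple but humanly tedious, which is exactly why the tool earns its keep.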
I understand that someone at Microsoft Research once found a bug in the Xbox 360's memory subsystem by model-checking a TLA+ spec of it. The story goes that IBM initially refused to believe the bug report. A few weeks later they admitted that such a bug did indeed exist, had been missed by all their testing, and would have resulted in system crashes after about four hours' use.
I was a long-time user until yesterday. Working at MSFT, I had been on the Windows Phone 7/8 train for a few years with a few devices, latterly a 925.
I'm not a power user, I don't use social media apps, just a few daily texts, a bit of browsing, music and podcasts. I also liked the simplicity of the UI.
Ultimately, I got tired of the crappy inbuilt podcast app and the lack of decent alternatives. My wife bought me a second-hand iPhone 5s, which arrived yesterday. My main complaints with the podcast app:
1. It took too long to start.
2. Navigating between specific pages of the app was slow.
3. It would sometimes get into a bad state if a podcast failed during streaming, requiring a restart of the app.
4. More generally, the audio stack seemed a little bit flakey. Very occasionally switching between music and podcasts would crash the phone.
Well, number 4 sounds like more of an OS problem really =/ I guess there's no app that could have fixed that for you.
Is it better on the iPhone? I tried to switch a while back but didn't do my homework: I bought an iPhone 4, and though it was a nice little handset it was limited to iOS 7, I think, and most apps wouldn't install. That was my one and only foray into iPhones.
I concur with both these statements, but would widen the former slightly to say that it helps you think (and then validate your thinking) about any non-trivial state-machine - it doesn't have to be concurrent or distributed.
[Used to work at Microsoft, used TLA+ successfully during product development]