The Bloom Clock (arxiv.org)
216 points by g0xA52A2A on June 4, 2019 | hide | past | favorite | 43 comments


Hi everybody. I'm the author of the paper. Let me know if you have any questions or comments.


Are you familiar with hybrid logical time systems like " HybridTime - Accessible Global Consistency with High Clock Uncertainty" (http://users.ece.utexas.edu/~garg/pdslab/david/hybrid-time-t...) or "Logical Physical Clocks and Consistent Snapshots in Globally Distributed Databases" (https://cse.buffalo.edu/tech-reports/2014-04.pdf)? I'd be interested in reading what you view as the tradeoffs between Bloom clocks and these systems.


I really like the Logical Physical Clocks paper. I'm glad you mentioned it :)

There's also Hybrid Vector Clocks which seem to accomplish the same goal as Bloom clocks:

https://pdfs.semanticscholar.org/2f05/8c7bfe3ddce90f9715842b...

http://drops.dagstuhl.de/opus/volltexte/2016/6677/pdf/LIPIcs...


Thanks for the links. I will read them during the upcoming days and hopefully find some time to write a blog or something. This looks pretty cool.


Interesting perspective! Do you have any insights whether it would work - or could be modified to work - under adversarial behavior? I.e. assuming that a fraction of the nodes are byzantine.


Very good question. I will have to think about it. Will get back to you later.


>"Whenever I go to a conference and I discover that there will be a presentation about Byzantine fault tolerance, I always feel an immediate, unshakable sense of sadness, kind of like when you realize that bad things can happen to good people, or that Keanu Reeves will almost certainly make more money than you over arbitrary time scales. Watching a presentation on Byzantine fault tolerance is similar to watching a foreign film from a depressing nation that used to be controlled by the Soviets — the only difference is that computers and networks are constantly failing instead of young Kapruskin being unable to reunite with the girl he fell in love with while he was working in a coal mine beneath an orphanage that was atop a prison that was inside the abstract concept of World War II."

James Mickens, 'The Saddest Moment' - https://scholar.harvard.edu/files/mickens/files/thesaddestmo...

edit - further wisdom;

James Mickens, Associate Professor, Authority On All Things, Harvard - https://mickens.seas.harvard.edu/wisdom-james-mickens


> or that Keanu Reeves will almost certainly make more money than you over arbitrary time scales.

Wtf? Reeves is a rather kind man who had some horrific life tragedies and yet managed a successful career (with no scandal). Why would I be sad about that?


He's being funny.

"I always feel an immediate, unshakable sense of sadness" - presents a personal and serious context - "kind of like when you realize that bad things can happen to good people," - reinforces the seriousness and invites the reader to join the writer in worthy feelings of moral empathy - "or that Keanu Reeves will almost certainly make more money than you over arbitrary time scales." - then punctures it with base motive of bitter jealousy, so you get conflicted by the two very different versions of sadness being presented as the same thing.

Structurally, it is not far off Shakespeare's lawyer joke from King Henry VI;

JACK - when I am king,– as king I will be,–

ALL - God save your majesty!

JACK - I thank you, good people:– there shall be no money; all shall eat and drink on my score; and I will apparel them all in one livery, that they may agree like brothers, and worship me their lord.

DICK. - The first thing we do, let's kill all the lawyers.

This explanation however, has probably stopped it from being funny ever again, for which I humbly apologise.


The problem with the joke is that it depends on choosing as a target a figure who can be widely reviled. It's a good joke, but it doesn't work when the target is fluffy kittens or Keanu Reeves.


The paper talks about dynamic networks where nodes are being added or removed. If m and k are fixed, does it set an effective upper bound on the number of nodes? How do you address the possibility of multiple nodes having colliding hash values?


Excellent question. I did not address this in the paper, but I probably should have. The upper-bound you mentioned technically does not happen directly because of the nodes, but because of the increased "rate of events" (which can either happen because of the increase of nodes, or because already existing nodes suddenly create more events). Depending on m and k, there will be a rate of events that will cause things to mess up. I'm still brainstorming about this, but I believe we could fix this issue if we'd use a scalable counting bloom filter (https://haslab.uminho.pt/cbm/files/dbloom.pdf), instead of a normal counting bloom filter.


> one can increment a single value outside the bloom filter to keep the values in the bloom filter low. Example: Instead of storing bloom filter [4,3,3,5,7,4,3,3,5], one could store it as (3)[1,0,0,2,4,1,0,0,2].

CMIIW, this only works if it's guaranteed that the reset (the transition from high values to low) is propagated to all the nodes, which is not the case in a distributed system. For example, if a node is unreachable for a long time and every other node resets except this one, it's possible to end up in a state with false negatives.


They're equivalent representations of the same value. There's no need to propagate anything to anyone.


Ahh. You are right. I misread the (3)[....] notation. It's not a reset, just a different representation.
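A minimal sketch of why the two notations are interchangeable (the function names are illustrative, not from the paper): the shared offset is just the minimum cell value factored out, so any node can convert between forms locally, with nothing to propagate.

```python
def compact(clock):
    """Factor the minimum cell value out as a shared offset."""
    base = min(clock)
    return base, [c - base for c in clock]

def expand(base, deltas):
    """Recover the plain counting-bloom-filter representation."""
    return [base + d for d in deltas]

clock = [4, 3, 3, 5, 7, 4, 3, 3, 5]
base, deltas = compact(clock)
assert (base, deltas) == (3, [1, 0, 0, 2, 4, 1, 0, 0, 2])
assert expand(base, deltas) == clock  # same value, different encoding
```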


Thanks for writing this.

I'd love to hear your input on a possible application for this. I'm not sure if you're aware, but Counting Bloom Filters are useful as space efficient counters in LFU cache admission/eviction [1].

I'm wondering if bloom clocks would be a good data structure for a space efficient LRU admission/eviction. Full LRU uses a linked list and sampled LRU just uses random sampling, but my intuition is that bloom clocks might perform similarly or better than sampled LRU...

The difficult part will be dealing with concurrency. I suppose it'd be easy to treat threads as "nodes", but not all languages have the luxury of thread-local storage (Go), so I'll cross that bridge when I get there.

Anyways, I'm going to test it out later - thought you might be interested in my thoughts on a possible use.

[1]: https://arxiv.org/abs/1512.00727


This is such an elegant idea that I literally laughed with joy for a few seconds when I read the abstract. You made my morning.


I'm happy you liked it :)


> Each time a node has an internal event, it hashes that event with k hash functions and increments its bloom filter.

Isn't this effectively the same as generating k random indices? In other words, would the bloom filter work equally well if we pick k random indices to update?

In regular bloom filters this won't work because we want identical elements to collide, but here that doesn't appear to be a concern. I can see how we might want identical nodes to collide (i.e. have the same node use the same indices over and over), but that is not what is going on.


Imagine that nodes A and B are in sync (they have the same bloom filter). If A sends a new event to B, B can check whether A's new event really results in that new bloom filter. You basically have a cryptographic proof that an event really happened at that moment. If the hashes of an event point to different indices, you know that something is wrong.
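A rough sketch of that check (the paper doesn't fix particular hash functions; salting SHA-256 K ways, and the sizes M and K, are assumptions here). Because the indices are derived deterministically from the event, a peer can recompute them and verify that exactly those cells advanced, which random indices would not allow.

```python
import hashlib

M, K = 16, 3  # filter size and number of hash functions (illustrative)

def indices(event: bytes):
    """Derive K cell indices by salting one hash function K ways."""
    return [int.from_bytes(hashlib.sha256(bytes([i]) + event).digest(), "big") % M
            for i in range(K)]

def tick(clock, event: bytes):
    """Increment the cells an internal event hashes to."""
    for i in indices(event):
        clock[i] += 1
    return clock

clock = tick([0] * M, b"event-1")
# A peer that receives the event can recompute indices(b"event-1") and
# check that exactly those cells were incremented.
```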


Hi,

Are you aware of https://hal.archives-ouvertes.fr/hal-01527110 ? This looks quite similar.


Hi!

My concern is:

>Each time a node has an internal event, it hashes that event with k hash functions and increments its bloom filter. It then sends that bloom filter to all other nodes.

I would suspect that the cost of broadcasting the bloom filter to the network would generally outweigh the cost of serializing a vector clock and sending it to a single node. It seems that this step in the algorithm is critical to this approach.

Do you have thoughts on this problem?


You are right, broadcasting is not a good strategy. I did not really explain the event sharing part in the paper, but as long as everyone multicasts the logical clocks, things should be scalable.


I'd love to see some experimentation with this. Nice work, thanks for sharing :)


Do the other nodes already have a copy of the bloom filter? If so, could you send an incremental update?

You've only changed the bloom filter in K places corresponding to your K hash functions, so seems like you could send a list of the changes you made.

(Disclaimer: I haven't read the paper, so I might be way off here.)


Even with that strategy you need N messages.

With 64 bits per clock component you can record a billion events per second (literally) for a very long time (as in billions of seconds) before needing to coordinate a clock reset. With 20,000 nodes and a MTU of 1250 bytes you'd only need 128 packets with a traditional vector clock -- contrast this to the (minimum) of 20,000 packets with the Bloom clock. That effect on the network is definitely significant.

Issues like dynamic node membership are interesting, but in practice many useful distributed algorithms that use vector clocks have to carefully control membership anyway. Membership changes are probably rarer than the events tracked by vector clocks. Having a special case for membership in most circumstances treats the network better than sending N messages per event.

EDIT: in other words, suppose you encode each of the N components of the vector clock with E bits. Suppose each packet can contain P bits. As long as (E/P) < 1, the number of messages sent in the system is lower with vector clocks.
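The arithmetic behind those figures, spelled out (same numbers as above, nothing new assumed):

```python
import math

nodes = 20_000
bits_per_component = 64
mtu_bytes = 1_250

# Bytes to ship one full vector clock, then packets at the given MTU:
vc_bytes = nodes * bits_per_component // 8      # 160,000 bytes
vc_packets = math.ceil(vc_bytes / mtu_bytes)    # 128 packets

# Broadcasting the Bloom clock instead costs one message per peer:
broadcast_packets = nodes                       # 20,000 packets
```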


Your argument about N messages per event is wrong, though. You don't necessarily have to broadcast logical clocks to every other node; that's a separate issue. As long as every node in the system multicasts the logical clocks, the system-wide clocks will still "move forward" together.


Thanks for sharing the paper.

I have a general question: is it best to update the clock after the event or just before, or does it matter?

How often do the nodes communicate?


The internal update order shouldn't really matter, as long as you send the right logical clock to other nodes. Communication depends on the system. Think of the bloom clock as a timestamp for your messages: whenever you send a message, you send the timestamp with it.


I'd love a summary "hackers' explanation" if you have time, or a link to some good sample code.


You can find sample code here: https://github.com/arberiii/Bloom-clock


I think I understand the advantages of using a vector clock instead of a timestamp, and the advantages of a bloom clock vs vector clocks.

But since a bloom clock can sometimes return an incorrect order of events (false positives), wouldn't using a timestamp be a simpler solution in most cases that can tolerate some unordered events?


Author of the paper here. Imagine it like this: if your bloom filter overlaps a lot with an incoming bloom filter, what you can do is go over your logs, find the bloom filter closest by increments to the incoming one, and then compare them. If the events are not comparable, i.e. concurrent, you will know it immediately. If they are comparable, your false-positive rate will now be extremely low (because you picked the bloom filter in your logs that is closest to the one you received).
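A toy sketch of that log-search idea (illustrative helper names and a simple L1 distance as the "closest by increments" measure; the paper may define this differently). Counting bloom filters compare like vector clocks: cell-wise less-than-or-equal gives the happened-before relation.

```python
def closest_in_log(log, incoming):
    """Pick the logged clock closest (by total cell-wise difference)
    to the incoming one, to keep false positives rare when comparing."""
    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(log, key=lambda c: distance(c, incoming))

def happened_before(a, b):
    """Partial-order test: every cell of a is <= the matching cell of b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

log = [[1, 0, 2], [2, 1, 3], [4, 1, 3]]
incoming = [2, 2, 3]
nearest = closest_in_log(log, incoming)
concurrent = not (happened_before(nearest, incoming)
                  or happened_before(incoming, nearest))
```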


It's important to keep in mind that the reason we use vector clocks in the first place is because timestamps also can return the incorrect order of events because of time drift, delivery times, etc.


I feel like this could be useful in DAG data structures.


Isn’t this a hyperloglog?


I’ve only read the abstract, but I think this does ordering where hyperloglog does presence.


Hyperloglog is an algorithm for approximating distinct elements in a set. This is focused on keeping time.


This is an intriguing paper. This seems highly related to proof of work algorithms in blockchain systems. Working out which events happened and at what time.


>> This seems highly related to proof of work algorithms in blockchain systems. Working out which events happened and at what time.

This is not a byzantine algorithm so it's got nothing to do with proof of work.


Just because it's not Byzantine doesn't mean it might not be relevant to proof of work algorithms where one of the challenges is understanding when work was completed relative to other components.


You might as well say that the flap of a butterfly's wings in Africa may later become the cause of a hurricane in the US.


Or distributed databases that are vulnerable to clock skew.



