
Could you give a bit more information to non-EE experts like me:

- What do you mean by pipeline?

I'm trying to draw an analogy with instruction pipelining, which can increase your throughput but doesn't fix the issue that your CPU has a fixed clock rate.

- What is metastability?

- Why are you mentioning merging clocks, given that asynchronous circuits are clock-less?

EDIT: Thanks for all the replies!



It takes a certain amount of time for a signal to propagate through a series of logic gates (or other electronic components) within a chip, and that delay also depends on many other factors (temperature, voltage, process variation). In most synchronous chip designs, you look at the worst (slowest) timing case for the design and constrain your clock speed to that.

You can break up the critical (longest/slowest) paths of a design through pipelining, which can be done manually or through nice automated techniques like register retiming. Basically, you add flops (as in D flip-flops, also known as registers) between sections of the design that can be broken into independent pipeline stages.

Example:

Say you have a design that takes 10ns from start to end flops. This means the max clock speed for that component is 100MHz. If you are clever, you may be able to dice that up into 10 separate pipelined stages: while there is now a 10-cycle startup latency, with continuous throughput you can run the design at up to 1GHz.

Even better, nowadays synthesis tools can do automatic pipelining through something called register retiming. Without doing any manual work, you tell the synthesis tool what clock speed you want to run at (or how many cycles you want in your pipeline), and it automagically moves/inserts flops to shorten the critical path of the overall design.
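To put rough numbers on the example above, here's a toy calculation (Python). It assumes the delay splits perfectly evenly across stages and ignores the per-stage flop setup/clock-to-q overhead a real design would pay, so it's the ideal-case upper bound:

```python
def pipeline_stats(total_delay_ns, stages):
    # Ideal split: each stage gets an equal slice of the combinational delay.
    # (Real designs also pay flop setup + clock-to-q time per stage.)
    stage_delay_ns = total_delay_ns / stages
    fmax_mhz = 1000.0 / stage_delay_ns  # ns per stage -> MHz
    latency_cycles = stages             # result pops out `stages` clocks later
    return fmax_mhz, latency_cycles

print(pipeline_stats(10.0, 1))   # (100.0, 1): the original 100 MHz design
print(pipeline_stats(10.0, 10))  # (1000.0, 10): 1 GHz, 10-cycle latency
```

The trade is explicit here: fmax scales up with the stage count while latency (in cycles) scales up with it too; only throughput improves for free.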


I'm not an EE expert either :)

Pipelining in circuit design means taking one "large" operation like the one quoted and breaking it down into a series of pipeline-able steps. Then the longest stage of your pipeline becomes the slowest path. So if you can break your instruction pipeline up into 4 stages, you can run at a clock speed roughly 4x faster without hitting propagation limits.

Wikipedia covers metastability pretty well: https://en.wikipedia.org/wiki/Metastability_in_electronics

Basically any logic gate can act as an oscillator if setup or hold timing is violated. It will bounce between zero and one, and no guarantee can be made about the final value. Synchronizer flops reduce the probability of this to near-zero (but not completely), and you can add successive flops to make it less and less probable. Basically anything that talks to the real world has a chance to screw up, and it's only statistics that keep it from happening.
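The "statistics" part is usually quantified with the classic synchronizer MTBF model, MTBF = e^(t_r/tau) / (T_w * f_clk * f_data). A quick sketch in Python; the device constants below are illustrative guesses, not figures for any real process:

```python
import math

def mtbf_seconds(resolve_time_s, tau_s, window_s, f_clk_hz, f_data_hz):
    # Classic synchronizer MTBF model:
    #   MTBF = e^(t_r / tau) / (T_w * f_clk * f_data)
    # t_r:  time the flop gets to resolve before its output is sampled
    # tau:  the flop's metastability resolution time constant
    # T_w:  capture window in which an input edge can cause metastability
    # All constants here are made-up illustrative values.
    return math.exp(resolve_time_s / tau_s) / (window_s * f_clk_hz * f_data_hz)

base  = mtbf_seconds(1e-9, 50e-12, 100e-12, 100e6, 10e6)
more  = mtbf_seconds(2e-9, 50e-12, 100e-12, 100e6, 10e6)
# Granting an extra 1 ns of resolve time (e.g. via another synchronizer
# stage) multiplies MTBF by e^(1ns/tau) = e^20, i.e. ~485 million times.
print(more / base)
```

This is why each added synchronizer flop buys an exponential improvement: the failure probability never reaches zero, it just gets astronomically small.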

Looks like the arbiter from the article is their solution, although they never explicitly mention metastability: https://en.wikipedia.org/wiki/Arbiter_%28electronics%29

Interesting, but it has some gnarly implications when it hits a metastable state (10x slower).


A pipeline means doing an operation in little bits, each in one clock cycle, at the cost of extra latency. So a slow combinatorial function might be split into 3 pipe stages, each doing 1/3 of the function, with the result arriving 3 clocks later.
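The "3 clocks later" behavior can be modeled with a tiny register-chain simulation (Python, a sketch where each list slot stands in for one pipe-stage flop):

```python
def pipeline(inputs, stages=3):
    # Each slot in `regs` models one pipeline register. On every clock,
    # data shifts one stage forward; the output of a given input appears
    # `stages` clocks later (None = pipeline bubble while it fills).
    regs = [None] * stages
    outputs = []
    for x in inputs:
        outputs.append(regs[-1])       # sample the last stage's output
        regs = [x] + regs[:-1]         # clock edge: shift everything forward
    return outputs

print(pipeline([1, 2, 3, 4, 5]))  # [None, None, None, 1, 2]
```

After the 3-cycle fill latency, one result emerges per clock, which is exactly the throughput win the parent comments describe.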

Metastability is what can happen if you change data at the instant (or close to the instant) that a synchronous flip-flop is clocked. The resulting value that's stored is neither a 1 nor a 0; instead the storage element ends up oscillating at a high frequency. This little bit of evil can infect subsequent logic stages, resulting in a chip that's a horrible hot buzzy mess of crud.



