Note that for really, really fast multiplication, you throw all sorts of synthesis techniques at the problem. The space of possible multiplier implementations is large, and people have been grinding away at it for a long time (Dadda's scheme dates to 1965; think about how many papers on slightly faster multipliers have been written in the 57 years since). I'm sure that if you sat down and looked at the multiplier in a modern high-performance core, it would be nigh-incomprehensible to human eyes.

tails4e 56 days ago [-]

I always thought that proving a multiplier is correct is an interesting problem. Sure, you can check an 8-bit one with all combos, and then scale up the algorithm to more bits, but proving a given implementation doesn't have a bug in one of the gates is more difficult. A 64-bit multiplier has 2^128 possible input combinations, so it cannot be exhaustively checked. Perhaps formal proof or logic equivalence checking can handle this, but I'd be interested in seeing how.
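For the small case, the exhaustive check is easy to sketch in Python (names are illustrative; the shift-and-add function stands in for whatever gate-level implementation is under test):

```python
# Exhaustive check of an 8-bit multiplier: only 2^16 = 65,536 input
# pairs, so comparing every output against the specification is feasible.
def shift_add_mul(a: int, b: int, width: int = 8) -> int:
    """Multiply by conditionally adding shifted copies of a, as hardware does."""
    acc = 0
    for i in range(width):
        if (b >> i) & 1:
            acc += a << i
    return acc

def exhaustive_check(width: int = 8) -> bool:
    """True iff the implementation matches a*b for every input pair."""
    return all(shift_add_mul(a, b, width) == a * b
               for a in range(1 << width)
               for b in range(1 << width))
```

At 8 bits this finishes in well under a second; the comment's point is that the same loop at 64 bits would need 2^128 iterations, which is why it can't scale.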

bsder 56 days ago [-]

> I always thought that proving a multiplier is correct is an interesting problem.

Induction proofs work fine. And SAT-based equivalence checking can confirm that your logic matches a reference.
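As a toy illustration of the equivalence-checking idea (function names are hypothetical; a real flow would build a gate-level miter and hand it to a SAT solver, asking whether "outputs differ" is satisfiable), two independently written multipliers can be compared:

```python
# Two independently written multipliers. A miter circuit would XOR their
# outputs and a SAT solver would prove the XOR can never be 1; at 4 bits
# we can simply enumerate all 256 input pairs instead.
def schoolbook_mul(a: int, b: int) -> int:
    # Sum of partial products, one per set bit of b.
    return sum(a << i for i in range(b.bit_length()) if (b >> i) & 1)

def peasant_mul(a: int, b: int) -> int:
    # Russian-peasant method: halve b, double a, accumulate when b is odd.
    acc = 0
    while b:
        if b & 1:
            acc += a
        a <<= 1
        b >>= 1
    return acc

def miter_unsat(width: int = 4) -> bool:
    # True iff the implementations agree on every input pair, i.e. the
    # miter's "outputs differ" signal is unsatisfiable.
    return all(schoolbook_mul(a, b) == peasant_mul(a, b)
               for a in range(1 << width)
               for b in range(1 << width))
```

The SAT formulation matters precisely because it avoids this enumeration at widths where it is impossible.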

The bigger issue is that multipliers are subject to runtime errors just like memory--so how do you check for that? The answer is that you build a second multiplier that works modulo a small number and check its output against the big multiplier's result reduced by the same modulus. If they differ, you flag an error.
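The residue-check idea can be sketched like this (names and the choice of modulus are illustrative; moduli of the form 2^k - 1 are popular for such checks because reduction is just end-around addition of k-bit chunks):

```python
# Runtime residue check: recompute the product modulo a small number and
# compare it against the big multiplier's result reduced the same way.
M = 2**16 - 1  # small check modulus, chosen for cheap reduction

def mod_m(x: int) -> int:
    """Reduce x mod 2^16 - 1 by folding 16-bit chunks with end-around carry."""
    while x > M:
        x = (x & M) + (x >> 16)
    return 0 if x == M else x

def checked_mul(a: int, b: int) -> int:
    product = a * b                       # the big multiplier under suspicion
    residue = mod_m(mod_m(a) * mod_m(b))  # small multiplier on residues
    if mod_m(product) != residue:
        raise RuntimeError("multiplier fault detected")  # flag the error
    return product
```

The check is cheap because the residue multiplier is tiny compared to the full-width one, yet a fault that flips bits of the product is very likely to change its residue.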

I seem to recall that only Hitachi and IBM had processors that did this.

samatman 56 days ago [-]

It is an interesting problem to think about, for sure.

My suspicion is that this kind of verification is amenable to breaking down into stages: essentially, drawing boxes around highly connected regions, proving the properties of those regions exhaustively, then treating the interconnects as links between black boxes with those known properties.

I'm sure there's some deep literature on this subject, though I don't even know where to begin looking for it.

adpcm 56 days ago [-]

There is indeed research and literature about this. For example, “Formal Verification of Multiplier Circuits using Computer Algebra” by Daniela Kaufmann; see https://danielakaufmann.at/publications/

These kinds of methods are used in EDA formal-verification tools to check real multipliers, FMAs, dot products, etc. in CPU and GPU designs.

klodolph 56 days ago [-]

It may be easier to prove that your multiplier synthesis algorithm is correct.

qubex 56 days ago [-]

“Anything you can do I can do meta.” - Charles Simonyi
