I'm your target market - averaging a few dozen board designs a year with complexity ranging from simple interposers to designs at density limits with large US+ FPGAs.
I'm always looking for workflow and automation improvements and the new wave of tooling has been useful for datasheet extraction/OCR, rubber-ducking calculations, or custom one-off scripts which interact with KiCAD's S-Expression file formats. However I've seen minimal improvements across my private suite of electronics reasoning/design tests since GPT4 so I'm very skeptical of review tooling actually achieving anything useful.
I tested with a prior version of a power board that had a few simple issues that were found and fixed during bringup. I uploaded the KiCAD netlist, PDFs for the main ICs, and also included my internal design validation document which _includes the answers to the problems I'm testing against_. There were three areas where I'd expect easy identification and modelling:
- Resistor values for a non-inverting amplifier's gain were swapped leading to incorrect gain.
- A voltage divider supplying a status/enable pin was drawing somewhat more current than it needed to.
- The power rating of a current-sense shunt is marginal for some design conditions.
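For reference, the first two issues each reduce to a one-line calculation. A minimal sketch with hypothetical values (the actual resistor values and rails aren't in this post):

```python
# Non-inverting amplifier: gain = 1 + Rf/Rg, so swapping Rf and Rg
# changes the gain dramatically. Values below are illustrative only.
def noninverting_gain(rf: float, rg: float) -> float:
    return 1 + rf / rg

rf, rg = 10_000, 1_000
assert noninverting_gain(rf, rg) == 11           # intended gain
assert abs(noninverting_gain(rg, rf) - 1.1) < 1e-9  # swapped: gain collapses

# Status/enable divider: quiescent draw is just V / (Rtop + Rbot),
# so scaling both resistors up cuts the wasted current proportionally.
v_in, r_top, r_bot = 12.0, 10_000, 4_700
i_divider = v_in / (r_top + r_bot)  # roughly 0.8 mA for these values
```

These are exactly the checks a review tool should be able to run mechanically once it knows the topology and values.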
For the first test, the prompt was an intentionally naive "Please validate enable turn on voltage conditions across the power input paths". The reasoning steps appeared to search datasheets, but at what I'd have considered the 'design review' step something got stuck/hung, with no results after 10 minutes. A second user input to get it to continue did produce an output, and my comments:
- Just this single test consumed 100% of the chat's 330k token limit and 85% of free tier capacity, so I can't even re-evaluate the capability with a more reasonable/detailed prompt, or even giving it the solution.
- A mid-step section calculates the UV/OV behaviour of an input protection device correctly, but mis-states the range in the summary.
- There were several structural errors in the analysis, including assuming that the external power supply and lithium battery share the same input path, even though the netlist and components obviously have the battery 'inside' the power management circuit. As a result most downstream analysis is completely invalid.
- The inline footnotes for datasheets render as `4 [blocked]`, which is a bare-minimum UI bug that you must have known about?
- The problem and solution were in the context and weren't found/used.
- Summary was sycophantic and incorrect.
You're leaving a huge amount of useful context on the table by relying on netlist upload. The hierarchy in the schematic, comments/tables, and inlined images are lost. A large chunk of the useful information in datasheets is graphs/diagrams/equations which aren't ingested as text. Netlists don't include the comments describing the expected input voltage range on a net, an output load's behaviour, or why a particular switching frequency was chosen, for example.
In contrast, GPT5.1 API with a single relevant screenshot of the schematic, with zero developer prompt and the same starting user message:
- Worked through each leg of the design and compared its output to my annotated comments (and was correct).
- Added commentary about possible leakage through a TVS diode, calculated time-constants, part tolerance, and pin loadings which are the kinds of details that can get missed outside of exhaustive review.
- Hallucinated a capacitor that doesn't exist in the design, likely due to OCR error. Including the raw netlist and an unrelated in-context learning example in the dev-message resolved that issue.
So from my perspective, the following would need to happen before I'd consider a tool like this:
- Walk back your data collection terms, I don't feel they're viable for any commercial use in this space without changes.
- An explicit listing of the downstream model provider(s) and any relevant terms that flow to my data.
- I understand the technical side of "Some metadata or backup copies may persist for a limited period for security, audit, and operational continuity" but I want a specific timeline and what that metadata is. Do better and provide examples.
- I'm not going to get into the strategy side of 'paying for tokens', but your usage limits are too vague to know what I'm getting. If I'm paying for your value-add, let me bring an API key (especially if you're not using frontier models).
- My netlist includes PDF datasheet links for every part. You should be able to fetch datasheets as needed without upload.
- Literally 5 minutes of thinking about how this tool is useful for fault-finding or review would have led you to a bare-minimum set of checklist items that I could choose to run on a design automatically.
- Going further, a chat UX is horrible for this review use-case. Condensing it into a high level review of requirements and goals, with a list of review tasks per page/sub-circuit would make more sense. From there, then calculations and notes for each item can be grouped instead of spread randomly through the output summary. Output should be more like an annotated PDF.
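The datasheet-fetching point is mechanical to implement: every component record in a KiCAD netlist carries a `datasheet` field. A hedged sketch of extracting those links, assuming the stock `(comp (ref "...") ... (datasheet "..."))` layout; a real tool should use a proper S-expression parser rather than a regex:

```python
import re

def datasheet_urls(netlist_text: str) -> dict[str, str]:
    """Map reference designators to datasheet URLs in a KiCAD netlist.

    Relies on the datasheet field following the ref field within each
    comp block; entries without a URL (often "~") are skipped.
    """
    urls: dict[str, str] = {}
    pattern = r'\(comp\s+\(ref\s+"([^"]+)"\).*?\(datasheet\s+"([^"]+)"\)'
    for ref, url in re.findall(pattern, netlist_text, re.DOTALL):
        if url.startswith("http"):
            urls[ref] = url
    return urls
```

From there, fetching each PDF on demand instead of requiring upload is a few lines of HTTP client code.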
The requirement to pull datasheets is kind of a deal-breaker. My current project has 70 BOM line items. I'm not shoving 70 datasheets into your tool, sorry.
As a reference for the OP I did a public professional-informal-mini-design-review over here a while ago: https://news.ycombinator.com/item?id=44651770 . I didn't pull any of those datasheets because I didn't need to. It would be interesting to see what your tool says about that design, and compare it to the types of things I thought needed attention.
Agreed. Tooling like this also needs far more careful structuring of the inputs than a thin wrapper like this.
It burnt a bunch of tokens and filled the context reading all datasheet files, whereas documentation should be queried to answer specific details connected to relevant netlist/sch nodes.
While there was brief discussion about environmental influence due to temperature and environmental EMI, it would be nice to get an idea of how this approach compares with regards to radiated emissions.
Fast edges and their wide spectral content are one of the earliest things to minimise/eliminate as part of compliance testing - other approaches aren't as active of an aggressor so getting to production may not be as easy given the stated goals for low cost/effort implementation.
The frequency domain data from the MXA shows the output during edge measurement, but there's no discussion about measuring how much these tones leak...
For two DRP (dual role) devices connected to each other, I believe in a default case the one that happens to advertise as a source first just becomes one.
The standard allows for a role swap at any point while connected, and if that’s triggered will be dependent on the firmware/config on one or both ends.
There’s probably more nuance hiding in the real world hardware too.
They can also prefer one role, with a mechanism called Try.SNK and Try.SRC (‘try sink’ / ‘try source’).
Basically DRPs toggle back and forth between sink and source until they happen to match up (one side has switched to source and one to sink). If it doesn’t prefer to do the role it’s resolved to randomly, it can switch to the other way and wait a bit - if the other side is fine with it then it will switch too and everyone is happy, if not you can switch back.
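The toggle-then-try behaviour described above can be sketched as a toy simulation. Probabilities and the always-accepting partner are illustrative assumptions, not taken from the Type-C spec:

```python
import random

def resolve_drp(seed: int = 0) -> tuple[str, str]:
    """Two DRPs independently toggle between advertising source (pull-ups)
    and sink (pull-downs); the first tick where they disagree is the attach.
    The end that lands on 'source' supplies vSafe5V on VBUS."""
    rng = random.Random(seed)
    while True:
        a = rng.choice(["source", "sink"])
        b = rng.choice(["source", "sink"])
        if a != b:
            return a, b

def apply_try(resolved: tuple[str, str], a_prefers: str) -> tuple[str, str]:
    """If end A resolved to the role it doesn't prefer, it attempts a swap
    (Try.SNK / Try.SRC); here we assume the other end always accepts."""
    a, b = resolved
    if a != a_prefers:
        return a_prefers, ("sink" if a_prefers == "source" else "source")
    return resolved
```

So a battery-powered device that prefers sink will still resolve quickly even if the coin flips initially land it in the source role.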
We use this for a device that can on-charge a device when it has external power plugged in (in which case we prefer source role) but not when running on battery (in which case we prefer sink but don’t actually pull any power because it’s self powered).
> Any DRP port must have pull-down 5k1 resistors on CC wires (as a sink), AND 10-22-56k pull-ups (as provider), but not at the same time. The DRP then alternates the sink advertising (5k1 pull-downs) with pull-ups (source advertising) about 10 to 20 times per second.
> If another DRP is connected, they both will toggle their advertising until a correct (pull-up - pull-down) combination occurs. Then CC controller(s) will stop toggling, and the end that happens to be in provider mode will provide +5VSAFE VBUS. The process will end in one or other direction, which will happen at random (since frequencies of toggling are independent).
It works surprisingly well in practice. The key thing to remember is that you rarely connect identical devices together.
A laptop and a power bank both support both modes, but the laptop will have a "prefer sink" policy and the power bank will have a "prefer source" policy. As long as you don't connect two laptops or two power banks, it'll work out just fine.
Moreover, it has an override mechanism in case you do connect two identical devices. If you do connect two laptops together for data transferring, the OS should be able to let the user override the power flow direction - or even disable charging altogether.
What are the alternatives, for a mass market standard like USB used literally by everything out there nowadays?
Unplugging it and plugging it in again until it works is easier for everyone than going to some obscure menu (although maybe smartphones/laptops/consoles could just display a modal "do you want to charge or be charged?").
USB historically solved this with directional connectors; it was why you had "A" and "B" sides. USB-C has an awful lot of user-hostile fallout considering its stated goal of "a cable that just works for everything".
I think the way to solve it, while keeping all the other goals of USB-C, would be to tie the charging direction to the plug orientation: not charging the way you want? Unplug and flip one side.
Having a plug that works differently based on the orientation it's plugged in feels like it would not quite be "keeping all the other goals of USB-C".
It could default to charging the device with the lowest battery level. I can't find it now, but I believe I read years ago that Apple does something like that.
There’s a lot more to it, but I attribute a lot of ‘better in some way’ to microcontrast followed by how the lens handles the transition to out of focus detail.
That's so cool on so many levels, and I really enjoyed it. Now I have to fight the urge to try to build it myself; good thing it's the weekend.
However, it does seem to miss the single most useful feature (for me) which is the resistance part. I understand there is a DC motor controlling the snap points and whatnot, but what I'd like is constant resistance I guess, to a configurable level, rather than snapping to specific points and such.
I don't think it would be possible to hack on top of the already-made hardware, and it didn't seem like it had been done on the software side of things either, although I did skim through, so maybe I missed it.
Sounds reasonable, wonder how that would actually feel in real life? As far as I understand, this would pass through digital parts, adding a little bit of (maybe noticeable) latency, but I wonder if the latency gets high enough for it to be a bit jarring that the resistance is dynamically changing as you apply torque.
In practice, when latency is small enough (on the ~1ms level, which is trivial to achieve using even pretty cheap parts) it's imperceptible.
I sometimes develop control loops for prototype systems which use a motor to emulate a combination of spring + friction damper, and even though I know that my code only runs every 1ms, it's really remarkable how much it feels like a real continuous analogue system.
Another good example is power steering, which uses a motor to remove resistance instead of add it. If I understand it correctly, it senses you applying torque to the steering column and adds proportional amounts of boost - but because it happens so fast, it just feels like the steering is magically lighter.
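The spring + damper emulation mentioned above fits in a few lines per control tick. A minimal sketch assuming a 1 kHz loop, encoder angle in and motor torque out; the constants and the crude finite-difference velocity estimate are illustrative:

```python
K_SPRING = 0.02   # N*m per rad   (virtual spring stiffness)
B_DAMPER = 0.001  # N*m per rad/s (virtual friction damper)
DT = 0.001        # 1 kHz control loop period, seconds

def haptic_tick(theta: float, theta_prev: float) -> float:
    """One loop iteration: given current and previous angle,
    return the torque command opposing displacement and motion."""
    omega = (theta - theta_prev) / DT  # crude velocity estimate
    return -K_SPRING * theta - B_DAMPER * omega

# Deflected 0.5 rad and still moving outward: motor pushes back.
assert haptic_tick(0.5, 0.5 - 2 * DT) < 0
```

Run fast enough, the discrete steps are far below what a hand can perceive, which is why it feels continuous.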
My dream is a piano keyboard with entirely software controlled mechanical key response. Every key individually mounted on a servostepper. As a bonus it could be used as a fake player piano. Or for practice you could make the wrong keys hard to press. Endless possibilities.
A compromise that is affordable and does exist is programmable response curves to key velocity and aftertouch pressure. It can make sense to have different curves for eg. piano vs harpsichord even if you can’t change the mechanical key impedance.
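A programmable response curve is just a remap of the 0-127 MIDI velocity range. A minimal sketch, assuming a single power-curve exponent per patch (real keyboards often use lookup tables or piecewise curves instead):

```python
def remap_velocity(v: int, gamma: float) -> int:
    """Remap MIDI velocity 0-127 through a power curve.
    gamma > 1 feels heavier (same strike, quieter note);
    gamma < 1 feels lighter. gamma = 1 is the identity curve."""
    return round(127 * (v / 127) ** gamma)

assert remap_velocity(64, 1.0) == 64
assert remap_velocity(64, 2.0) < 64   # harpsichord-ish heavy response
assert remap_velocity(64, 0.5) > 64   # light, easily-saturating response
```

The endpoints stay fixed at 0 and 127, so only the feel of the middle of the dynamic range changes.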
I haven’t seen it in the wild, but using this you could make the wrong notes quieter/louder or even play a different sound. But I think we all know when we play a wrong note, so the utility might be small.
Just a tangential note to say whenever I see these terms in discussion of MIDI keyboards it reminds me how disappointed I am the vast majority of MIDI controller (and multi-thousand dollar flagship synth) keyboards still don't fully support per note velocity or polyphonic aftertouch. It's only been 40 years kids... (sigh).
I personally disagree. That velocity and aftertouch is all fun modulation input to my eurorack synth. Sure, I have a limited number of polyphonic notes (4) I can support, but it's still more modulation possibilities.
I'm not convinced it would work very well on making you a better player but who knows. Either way, it sounds like a good way to injure yourself. Piano is a very percussive instrument and if you're hitting the keys with any force and they don't give the way you expect them to I imagine that won't be very great for your joints.
A differently complex and smaller approach might be to combine the knob with an axial-flux PCB BLDC, like what Carl Bugeja made [0, 1]. It might be suited to get haptics in something as small as the article's knob, although to get an in-built display you'd have to use one of those displays that fit in lego bricks [2, 3] with a slip-ring.
Many thanks for the links/references. I don't really care about the display itself (probably prefer without it actually), but never saw those other links before, interesting stuff.
This is what cars need. Only make the entire dial depressable instead of the embedded screen. Use different haptics for each setting so you can feel which setting you’re changing.
Curious which single board(s) would these be? The latest orin nano super cards seem to have updated software.
I have read good things about Nvidia Shield support - it is still the best streaming device out there and gets bug fixes and feature enhancements even for way old builds.
The latest usually have updated software. As they stop being latest the support dwindles and, depending on reliance on proprietary code, so will your ability to maintain it yourself.
Some people have different views on what's an embedded system, but hardware serial, USB, and Ethernet are table stakes for interfacing with most industrial hardware or for robotics uses.