Tales from the lab

Fine, I don’t have a lab. I have a rec-room. I have a desk in the rec-room. Next to the Xbox, in between the kids’ Legos and the imitation fireplace. Still, it has a soldering iron and a scope on it, so it counts.

The UnIC concept calls for a very cramped, tiny PCB, the size of a 40-pin DIP package. This is doable, but not really doable at home. I had to use very, very small MOSFET and diode packages; things that are not in the realm of home assembly. They look huge on the screen in the PCB editor, but they’re like dust in real life. Plus, my eyesight isn’t what it used to be.

Since I didn’t want to pay for professional assembly – at least not at first – and I wouldn’t have been able to change, rework, or even measure anything on such a small form factor, I opted for a larger development version.

This variant uses SOT-23 packages for the front-end circuit and larger packages for some components. It should be within reach for home-assembly.

A word from our sponsor…

I sent the boards out to PCBWay to be made. This was my first project with them, so I was really curious how well it would go. My experience was really positive, and I’m not just saying that because they are sponsoring me. Sending in the design files was simple. They have pretty clear design rules documented on their site, so I didn’t have to deal with any DRC feedback from them. The boards showed up amazingly quickly: they spent a few days in manufacturing and another few in transit. In fact, they showed up earlier than the components I ordered from Digikey.

The PCBs arrived in this nice box:

The quality of the end product was really good. I haven’t noticed any misregistration, the copper pattern seems flawless, the solder mask is nice and sturdy, and the silk screen is clear and sharp. Granted, I wasn’t exactly pushing the technology with this design; it’s a home-project affair.

Bait and switch

I’ve talked before about how the GoWin FPGAs I selected for this project are rather expensive at Mouser, but I got a shockingly good price by going to more direct distributors. This was the main reason for selecting this solution over the Efinix part. Granted, those prices were for – low – production runs, but I got a promise that they would honor the same price for prototypes.

Well, it was time to test that offer, so I asked for 10 samples. After some humming and hawing, I eventually heard back from them with a price that was 10x the original offer! Needless to say, at that price the Efinix solution would have been way more cost-effective, but now, with the PCBs already fabricated, I was in a bind.

To cut my losses, I decided to get by with only 5 samples, to which the reply was: they wouldn’t sell me anything less than 10 pieces; I should go to Mouser. The source with the outrageously high prices. The source that doesn’t even have the part I selected in stock.

Well, that was a nasty experience, but what can you do? Select a slightly less ridiculously priced, in-stock, pin-compatible variant and order as few as I can get by with. And try to learn from the experience.

Building the thing

After some time, finally all the parts arrived:

It was time to put the board together. I mentioned before how I replaced the insanely small components with larger ones on this devboard. Well, it turns out I over-estimated my capabilities. The SOT-23 packages were easy:

But the 0402 passives? Those things either shrunk over the years or my hands got bigger. After some practice and liberal use of unsavory language, I managed to get them to stick to the board. In the right orientation too!

I only assembled the protection circuit and the power supplies at this point. I didn’t want to blow my hard-earned FPGAs with a misfire on the regulator design. I’ve fired the thing up and …

It worked! Well, sort of. The regulator generating the FET bias voltage output a funky voltage instead of the expected 4.5V.

Some tooling around revealed that I had screwed up the resistor values for the feedback network. No biggie, except of course fixing it involved removing and re-soldering some of those pesky 0402 resistors. It also involved finding the new and improved resistor values, which I didn’t have in that package, so some improvisation was necessary:
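The feedback math itself is simple; what bit me was the values plugged into it. A quick sketch, assuming a typical adjustable regulator with an internal reference – the 0.8V reference and the resistor values below are illustrative, not the actual BOM values:

```python
# Feedback divider for a hypothetical adjustable regulator.
# V_REF and the resistor values are illustrative, not the real BOM values.
V_REF = 0.8  # assumed internal reference voltage, in volts

def vout(r_top, r_bot):
    """Output voltage of the standard two-resistor feedback network."""
    return V_REF * (1 + r_top / r_bot)

# Target: the 4.5V bias rail. With a 0.8V reference we need
# R_top/R_bot = 4.5/0.8 - 1 = 4.625; e.g. 46.4k over 10k (E96 values).
print(round(vout(46_400, 10_000), 2))  # → 4.51, close enough to 4.5V
```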

Next, I tried applying 12V to the input, just to see what the regulators did. What they did was let the magic smoke out (at least the 1.2V core regulator did), taking one of the protection diodes with it.

I still don’t quite understand what went wrong. Replacing the regulator and retrying the experiment showed that 12V input power is indeed OK. The difference between the two trials was that the first time I used a 12V power brick, and the second time a better-regulated (and, most importantly, current-limited) bench power supply. My current theory is that the brick was somehow faulty and really, really didn’t like the turn-on transient, sending something significantly higher than 12V down the poor circuit. This in turn fried the regulator, which promptly shorted its inputs, overloading the diode feeding it. It’s as good a theory as any.

Since the release of the board, I’ve learned a few things, so these regulators are in need of some re-design anyway. So, for now, I call it good.

FPGA in the loop

Next up, I soldered the FPGA down. This – again – turned out to be more cumbersome than I remembered: it’s a 0.5mm-pitch QFN package with 88 pins and a large center lug. First, I tried using regular solder.

This technique worked fine in the past, but not this time. I couldn’t align the chip, I couldn’t get the solder to stick to the pads. It was a disaster.

Time to change tack: I have some solder paste at home (the very low melting point type). I smothered some of it on the pads (not even trying to keep it on the pads only, just in the general neighborhood), plopped the chip down, and went at it with a hot gun. Now, I must note that when you have solder paste all over your pads, you have no idea whether you’ve aligned the chip on top properly. You are reliant on surface tension to pull the IC into the right position once the solder melts. Turns out, that wasn’t the case. Or rather, the chip soldered itself down one pin off in one direction.

Damn! More hot-gun action and some poking and coercing later, the chip more or less sat in the intended location. Not perfect, but good enough:

I applied power and was happy to see no smoke or other deleterious action.

JTAG baby!

Hooking up JTAG is the next step. This is simple in theory: I have a JTAG connector and a corresponding programmer I made a long time ago. It uses an FTDI chip, so it should work well with the GoWin toolset.

I’ve plugged it in and:

Hurray! I can talk to the chip.

Now, the circuitry involved is not all that complicated – in other words, the feat is not that great – but still, this is a huge step: from here on, I should be able to download cores of various kinds and test out the rest of the functionality of the circuit. This is the major milestone, after which incremental progress can be made. Or so I thought…

I spent a few hours putting together a simple circuit that drove some outputs with a square wave. This is important because my next step was going to be testing the voltage-limiting logic. When I tried to download it to the board though, I got this:

Ouch! So, I can talk to the chip, but I can’t program it. That stopped me in my tracks for a while. What could cause such a thing? If I had screwed up something about the power supplies or the JTAG connector pinout, most likely I wouldn’t have any communication at all. What could cause partial operation? Then it hit me: the difference between just querying the chip ID and trying to program it is that the latter takes way more bits. If I have an unreliable connection for some reason, it could easily manifest itself in this behavior. JTAG, luckily, is something that can be slowed down, and slowing it down usually helps with reliability. (This has to do with driving data changes on one edge of the clock while capturing them on the other.)

Some screwing around revealed the command-line option to set the clock frequency. 6MHz didn’t work, but 2.5MHz did. That’s good enough for now, so I didn’t try to find the limit where things break. And with that, I have functional JTAG!
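To see why a marginal link can pass a short ID read yet fail a long programming sequence, here’s a back-of-the-envelope sketch. The per-bit error rate and bitstream size are assumptions for illustration, not measurements:

```python
# With a per-bit error probability p, the chance of a clean n-bit
# transfer is (1 - p)**n. Both numbers here are assumptions.
def clean_transfer(n_bits, p=1e-5):
    return (1 - p) ** n_bits

print(f"32-bit IDCODE read: {clean_transfer(32):.4f}")
print(f"~1 Mbit bitstream:  {clean_transfer(1_000_000):.6f}")
# A short read almost always succeeds; a full download almost never does.
```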

Logic levels

The next point of action was to validate that the level shifter works. I did that by making a small external board (really just a breadboard) containing a few logic gates. Both are 74×126 variants, one from the LS, the other from the HCT series. The important difference between them is that – while both operate from a 5V supply – the LS-series logic can’t really drive anything beyond 3.3V. The HCT-series gate, however, happily drives all the way up to 5V.

I decided to simply loop the gates back to the FPGA. This way, I could drive my test signal and detect it from the FPGA itself. In fact, I’ve re-transmitted the detected signal again, so I could compare that with the original on the scope:

What you see here is this:

  • The transmitted signal is on channel 1 (yellow)
  • The received signal from the 74HCT126 gate is on channel 3 (blue)
  • The level shifter output (i.e. the signal entering the FPGA) is on channel 4 (green)
  • The re-transmitted signal from the FPGA (and through the level-shifter again) is on channel 2 (purple)

This is neat and confirms that the level shifter (really a voltage limiter) works and protects my FPGA pins from dangerously high voltages. Yes, the 3.6V is a bit high, but don’t forget that my reference is not the same as that of the FPGA: there’s one diode-drop of difference, or roughly 250mV, which brings it back to what it should be: ~3.3V.
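The correction is just arithmetic, but for the record – the 250mV drop is an estimate, not a measured value:

```python
# What the scope reads vs. what the FPGA pin actually sees.
# The 250mV diode drop is an estimate, not a measured value.
v_scope = 3.6        # volts, measured at the limiter output
v_diode_drop = 0.25  # volts, one diode drop between the two references
print(round(v_scope - v_diode_drop, 2))  # → 3.35, right around the 3.3V rail
```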

As a side effect, I also measured the rise and fall times of my I/Os. The scope reports about 4ns falling and 5ns rising edges:

In this test, I used the 74LS126 gate. This means that the blue trace actually shows the rise and fall times of a standard LS-series gate. My logic is faster than that. That, of course, is not the highest bar of achievement, but it shows that this circuit should be able to handle anything that ’80s electronics can throw at it. As an aside, signal integrity on this setup is really atrocious: I have long lead wires and no nice ground planes on the breadboard. Even the PCB is just a two-layer one, so ground return is somewhat compromised there as well. As such, it’s not a good environment to really test what the level shifters are capable of: the real thing should be better. It’s just impossible to say by how much.
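A quick way to put those edge rates in perspective is the usual 0.35/t_rise bandwidth rule of thumb:

```python
# Rule-of-thumb analog bandwidth implied by an edge rate:
# BW ≈ 0.35 / t_rise (single-pole approximation).
def bandwidth_mhz(t_edge_ns):
    return 0.35 / (t_edge_ns * 1e-9) / 1e6

print(round(bandwidth_mhz(5.0)))  # rising edge:  ~70 MHz
print(round(bandwidth_mhz(4.0)))  # falling edge: ~88 MHz
```

Plenty of margin over the single-digit-MHz bus speeds of ’80s-era parts.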


The last thing I wanted to ring out electrically was the OLED display I intend to add to the package. It’s a tiny, 128×32-resolution affair, something that fits in the DIP-40 form factor. It supports a few interfaces; I decided to use SPI. But does it work?

Originally I wanted to use my trusty Espresso processor core for this purpose: it was a big reason to create UnIC at all. However, this is where the FPGA shenanigans caught up with me. The core is just a tiny bit too large for the 4kLUT version of the FPGA. It would not have been an issue if I could use the intended 9k version, but now it’s a problem.

What should I do? Do I spend time on getting Espresso on a diet? What would the point be? I certainly would not want to continue using this smaller FPGA, so all that work would be wasted.

Or should I create a custom (HW) sequencer that can bring the display up? That sounds more difficult than writing a C program for a processor. Or should I start bringing up a smaller processor (say, a 6502) on UnIC? That, in turn, sounds like a whole lot of work. In the end though, both are long-term goals. If I want to create – for instance – a VIC-II replacement in this form factor, I want some snazzy graphics displayed on the OLED screen, and for that I’ll need a HW sequencer. If I want to showcase the capabilities of UnIC as a replacement for old parts, making it act as a 6502 or a Z80 is a good demo project.

Neither route is a waste of time, but the first promises payback sooner. So that’s the route I took. In a few days (I also have a life, you know) the code started to take shape. It worked in simulation, so it was time to test it in real HW. I’ve compiled the code (and fought with the tools a little to let me map the functions to the pins I have used on the PCB). I’ve downloaded the code. I’ve let it go… And the screen remained completely black. Damn!

Out with the scope again, hooking it up to the pins, checking the signals. Everything looked right. So, what gives? The problem with this display is that everything is write-only. There’s no way to read any information back. If it doesn’t work, then, well, it doesn’t, and it remains – quite literally – a black box.

Some hair-tearing later, I decided that maybe it needs an explicit reset. Sure, it would be decidedly poor form to design a chip (in this case, a display driver) without power-on reset, but who knows? So, I hooked up the reset to a switch and flipped it a few times before downloading my code.

And, success! Display … displays.
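For the curious, after the reset is released, the bring-up boils down to shifting a short command sequence out over SPI with the D/C line held low. Here’s a sketch assuming an SSD1306-style controller – a common driver for 128×32 panels – with byte values from that datasheet; I can’t confirm the exact controller on this module, so treat it as illustrative:

```python
# Command stream for bringing up a 128x32 OLED, assuming an SSD1306-style
# controller. Byte values are from the SSD1306 datasheet; how they get
# shifted out (CPU or HW sequencer) is a separate question.
INIT_SEQUENCE = [
    0xAE,        # display off
    0xD5, 0x80,  # clock divide ratio / oscillator frequency
    0xA8, 0x1F,  # multiplex ratio: 31 (32 rows)
    0xD3, 0x00,  # display offset: 0
    0x40,        # display start line: 0
    0x8D, 0x14,  # enable the internal charge pump
    0x20, 0x00,  # horizontal addressing mode
    0xA1, 0xC8,  # segment remap + COM scan direction (flip both axes)
    0xDA, 0x02,  # COM pins configuration for a 128x32 panel
    0x81, 0x8F,  # contrast
    0xA4,        # resume display from RAM contents
    0xA6,        # normal (non-inverted) display
    0xAF,        # display on
]

def init_bytes():
    """The byte stream to shift out over SPI with D/C held low."""
    return bytes(INIT_SEQUENCE)

print(init_bytes().hex())
```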

Conclusion and next steps

This is where the project is at the moment. Overall, I would call this build a success: there were hiccups, but nothing major. I didn’t have to add massive amounts of bodge-wires to fix problems, let alone needing another board release.

At this point, there isn’t a whole lot left to do on this development board. The next step is to take everything I’ve learned and release the form-factor board.

So, what did I learn?

I’ll need to go in big and order a large quantity of FPGAs. Either that, or change to a different one. I would consider the second option a project restart, which I’m not all that inclined to do at the moment. Besides, if this project ever reaches reasonable volume production, the single-unit cost of the FPGA won’t be a big deal.

I want to rework most of the power supplies. This is something I haven’t talked about all that much so far, mostly because I realized it between releasing the board and getting the PCBs back. In the current design, I have an LDO generating the 3.3V supply and a DC/DC for the 1.2V core voltage. This is fine for the FPGA alone, but that’s not the only thing on the 3.3V rail. Crucially, there’s an embedded SDRAM chip, which, when active, can pull a lot of current. Again, not a big issue, unless we also apply something like 12V power to the chip. So, the new idea is to generate the 3.3V with a DC/DC converter and sub-regulate the 1.2V from there using an LDO.
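The reasoning is simple dissipation math: an LDO burns (Vin − Vout) × I as heat. A sketch, with load currents that are assumptions rather than measurements:

```python
# An LDO dissipates (V_in - V_out) * I as heat. The load currents below
# are assumptions, not measured values.
def ldo_loss_w(v_in, v_out, i_load):
    return (v_in - v_out) * i_load

# 1.2V core sub-regulated from 3.3V, at an assumed 100mA: tolerable.
print(round(ldo_loss_w(3.3, 1.2, 0.100), 2))  # → 0.21 (watts)

# Dropping all the way from a 12V input with an LDO would cook it,
# which is why the 3.3V rail moves to a DC/DC converter instead.
print(round(ldo_loss_w(12.0, 3.3, 0.300), 2))  # → 2.61 (watts)
```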

The bias generation for the MOSFET gates will also change: instead of a fixed-voltage LDO, I’m going to generate the actual threshold voltage of the FETs + 3.3V. This involves a 3.3V fixed-output LDO and a 2N7002 FET in the following configuration:

The benefit of this setup is that the gate voltage will track the MOSFET threshold voltage as it changes with temperature. That assumes all MOSFETs in the design are at the same temperature, but that’s a fair assumption: they’re close enough together on the same PCB, and no large loads exist around them to create hot-spots. They will all more or less sit at ambient temperature.
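A sketch of the idea, with an assumed threshold voltage and a ballpark tempco for small-signal FETs (check the 2N7002 datasheet for real numbers):

```python
# Assumed: Vth = 1.6V at 25C, drifting about -4mV/C. Both numbers are
# illustrative; the 2N7002 datasheet has the real characteristics.
def vth(temp_c, vth_25=1.6, tempco=-0.004):
    return vth_25 + tempco * (temp_c - 25)

def overdrive(bias_v, temp_c):
    """Gate headroom above threshold at a given temperature."""
    return bias_v - vth(temp_c)

for t in (0, 25, 50):
    fixed = overdrive(4.5, t)             # fixed 4.5V LDO bias
    tracked = overdrive(vth(t) + 3.3, t)  # Vth + 3.3V tracking bias
    print(f"{t:3d}C: fixed {fixed:.2f}V, tracked {tracked:.2f}V")
# The tracking bias holds the overdrive at exactly 3.3V at any
# temperature; the fixed bias wanders as Vth drifts.
```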

The biggest hurdle in all this, I think, is going to be dealing with assembly. I won’t be able to put the circuit together at home anymore; I’ll have to find and pay someone to do it for me. Hopefully, PCBWay is going to be my solution. Maybe not. Either way, this is my first (home) project where I’ll need to do that, so I’m sure some learning will be involved.
