On The Construction of Impractical Things For Practical Reasons

    Engineers have to make decisions during the design process. Nearly all of these decisions are practical ones, and they are the result of constraints imposed on the project by various outside requirements. That’s basically what the bulk of engineering is: solving problems in the presence of practical constraints. I like to refer to these constraints as ‘boundary conditions’, a term borrowed from the theory of differential equations, but surprisingly apt.

    Differential equations are simple and elegant things. They describe the behavior of phenomena in the most ideal form. In that ideal form they are easy to understand and solve, once you comprehend the language used to express them. But for solving practical problems they are, in and of themselves, a non-starter. In order to make them useful to us, we have to apply boundary conditions to describe the particulars of the situation we wish to know more about.

    The heat equation is a great example. It describes the distribution of thermal energy in some region over time, and it does a really good job of that. But what if we don’t just want the general rule for heat distribution in the universe, but rather want to know how heat will be distributed along a motorcycle exhaust pipe from manifold to tip? Specifically, we want to know what the temperature of the pipe will be at the point where it crosses the rider’s leg, 10 minutes after the engine is started. Enter boundary conditions. The boundary conditions describe the shape and contours of the pipe, and the thermal properties of the steel and chrome used to make it. We then take this mathematical description of the pipe, stick it into the heat equation, and solve to get the answer to our question.

    Except it’s not as simple as substituting one function into another and solving for X. There is no single X. The whole damn thing is X. The heat equation is a differential equation, which means the value at any one point depends on the values at all the other points over time, so you have to solve the whole thing at once. It gets very involved very quickly. I won’t get into the nuts and bolts of how this is done; basically you start out solving for a handful of points, then keep splitting things into smaller and smaller pieces and solving those. It’s a conceptually simple but arithmetically intense process, a lot of grunt work (now easily performed by computers, but still).
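    To give a flavor of that grunt work, here is a minimal sketch of the idea for a much simpler case: a 1D rod with its ends held at fixed temperatures (the boundary conditions), marched forward in time with explicit finite differences. The material constant, grid, and temperatures are invented for illustration; a real exhaust pipe would need a 3D model and a far more careful scheme.

    ```python
    # Minimal illustrative sketch: 1D heat equation u_t = alpha * u_xx, solved
    # by explicit finite differences. All numbers here are made up.
    import numpy as np

    alpha = 1e-4              # thermal diffusivity (m^2/s), hypothetical material
    length = 1.0              # rod length in meters
    nx = 51                   # number of grid points along the rod
    dx = length / (nx - 1)
    dt = 0.4 * dx**2 / alpha  # time step chosen to satisfy the stability limit

    u = np.full(nx, 20.0)     # start the whole rod at 20 C
    u[0] = 300.0              # boundary condition: hot end held at 300 C
    u[-1] = 20.0              # boundary condition: far end held at 20 C

    steps = int(600.0 / dt)   # march forward 10 minutes
    for _ in range(steps):
        # each interior point is updated from its neighbors; the endpoints
        # stay pinned at the boundary values
        u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

    print(f"midpoint temperature after 10 minutes: {u[nx // 2]:.1f} C")
    ```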

    The practical constraints of an engineering project work the same way. You start with a general idea of the thing you want to build, and then you have to identify the boundary conditions which apply to that thing. Most projects are defined by at least three fundamental boundary conditions: cost, quality, and time. There’s a very old (and very true) maxim that you can’t have optimal values for all three at the same time — fast, good, cheap: pick two.

    Cost, quality, and time are what I call first-order boundary conditions. They apply across every engineering discipline (civil, mechanical, electrical, etc.) and have existed for every piece of technology since the stone axe, long before there were units for money or time. Then there are what I call second-order boundary conditions, which are domain-specific. Examples of second-order BCs for an electronics project are:

    • power budget
    • footprint (size and shape)
    • manufacturability
    • regulations (FCC part 15, UL, CE, etc.)
    • operating speed
    • security
    • error budget*

    *- error budget is the amount of deviation from repeatable, perfect operation that you’re willing to accept, dictated by end use. For example, a medical device has a much tighter error budget than a guitar stompbox (I hope).

    This is a non-exhaustive list, but even in this brief offering there are compromises which would have to be made. For example, in modern CMOS microprocessors there is a direct tradeoff between operating speed and power budget. Likewise, footprint and manufacturability can compromise each other: a tiny, complex board with 0201 and 01005 devices on it may be beyond the capabilities of some assembly houses, reducing bid competition and thus increasing cost. Security can also interact with footprint and manufacturability: test points and critical data lines may need to be non-obvious or even concealed in the final product.

    Let’s say someone hires you to build a portable two-way radio; that immediately sets certain constraints on your design. If it’s portable, it will require a battery that must last a certain minimum length of time (power budget), and it’ll have size, shape, and weight limitations (footprint). Since it’s a radio, it will have to pass Part 15 certification as well. Chances are that you, as the electrical engineer, are not designing the enclosure too, so you’ll be told what the footprint is and you’ll have to make it work. The battery life might well be dictated by the marketing department or the end customer. And so on. In professional work, where you’re hired to build something specific, the conditions are largely dictated to you and it’s your job to reconcile them with each other successfully. Engineering is the political arm of science: the resolution of ideals with realities to achieve a working technology.

    Choose Your Own Adventure

    Sometimes it’s nice to get to choose your own boundary conditions. That’s where personal projects come into play. I really, really like building things, and I always have hundreds of ideas for new hardware bouncing around in my head, so personal projects are a nice outlet for that. I also like to couple these builds to some higher purpose: learning about a new technology or verifying some hypothesis.

    A few weeks ago I released a project on this blog: the ChronodeVFD wristwatch. I had a few goals in mind when I conceived of it back in August:

    • Make a wearable device that could integrate a non-wearable technology (the display tube).
    • Explore new ideas for making PCBs wearable.
    • Get noticed at parties.

    The last one seems silly, but it’s the truth. I wanted this thing to be functional but still look really cool, because I wanted it to act as a conversation starter at tech meetups and raise my profile for prospective employers — a practical consideration. Naturally, in the initial planning stages, I had to decide what my boundary conditions were — what I cared about and what I didn’t. In the end, I opted for a design which is functional and awesome-looking, but largely impractical to wear on a daily basis. But I wouldn’t need to wear it on a daily basis anyway — it’s an impractical thing built for practical reasons.

    In the debut blog post, I discussed the different techniques I used to adapt the display tube for wearable use (specifically the roll cage), and how I made the PCB wearable. Still, I wanted to share a little bit of my decision process in the design of the watch. Specifically, I’ve received a number of inquiries about the power supply and why I made the choices I did, so I figured I’d try to flesh that out a bit. The first half of this article covered the theory of design boundary conditions; the second half will cover the praxis, focused on a single design area.


    Power Budgeting

    Because this was a wearable device, it needed to use a battery. However, the high current draw of the display meant that I couldn’t use a coin cell, due to its high internal resistance. A lot of people have asked me why I didn’t use a lithium-ion battery. I have to agree that, electrically, lithium batteries are ideal. Their nominal voltage, low source impedance, and large capacity would have made things a lot easier in the design stage. Unfortunately, they also occasionally catch fire if they are physically damaged or experience electrical faults. Keeping in mind that this thing was going to be strapped to my wrist, and not protected in any meaningful way from damage, I think it’s obvious why I did not use a lithium cell. LiFePO4 might be a viable alternative, but I haven’t had a chance to test it out yet.

    I decided I would set the design goal of using a single alkaline cell, either AA or AAA (the board has mounting points for both). Working from a AA form factor has a number of advantages: alkaline AAs are commonly available and reasonably cheap, and if I decided to change to another battery chemistry in the future, nearly all of them are available in AA (or near-AA) sizes. The long, slender shape of the AA also meant that I could place it along one edge of the PCB and leave more space available for the rest of the circuit. Since I wasn’t going to wear this thing all day, every day, I figured a 12-hour battery life would be acceptable. The choice of a single alkaline cell also meant that I’d only be getting a nominal 1.5V from the battery, while the logic in the VFD driver chip requires at least 3V, so I knew I was going to need at least one boost converter to generate a higher logic rail voltage; in this case an MCP1640.
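    As a sanity check, the 12-hour goal translates directly into an average current budget at the battery. The capacity figure below is my own rough assumption (usable alkaline AA capacity varies a lot with drain rate), not a number from the original design notes:

    ```python
    # Back-of-the-envelope power budget: how much average battery current a
    # 12-hour runtime allows. The capacity value is an assumption.
    usable_capacity_mah = 1500.0   # assumed usable capacity of an alkaline AA
                                   # under a moderate, bursty load
    target_runtime_h = 12.0        # design goal from the article

    average_budget_ma = usable_capacity_mah / target_runtime_h
    print(f"average battery-side current budget: {average_budget_ma:.0f} mA")
    # -> roughly 125 mA averaged over the runtime, which every downstream
    #    rail (logic boost, HV boost, filament) has to fit inside
    ```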

    VFDs place special requirements on a circuit. I’m not going to get into the specifics of VFD operation (read this), but suffice it to say that they require two separate voltages in order to operate: the filament voltage and the grid/anode voltage. Filament voltages range from 1.25V to 5V, and grid voltages can go from 12V all the way up to 75V. For the IVL2-7/5 VFD tube used in the watch, the recommended filament voltage is about 2.5V, but it will still work at 1.5V. The recommended grid/anode voltage is ~24V, but it will still work at 13.5V. The filament power is drawn straight from the battery, but I still needed to generate a +13.5V rail for the grid and anode. Enter the second boost converter, an NCP1406.

    Boost converters are pretty cool devices. They let you generate a higher voltage from a lower one, and in their modern form they can be pretty compact and highly integrated. One trade-off of generating a higher output voltage, though, is a higher input current. In an ideal conversion, the ratio between the input current and the output current is the same as the ratio between the output voltage and the input voltage (V_o / V_i = I_i / I_o), i.e. 100% power efficiency. So, ideally, if you’re generating 4.5V with 20mA into the load from a 1.5V input, you’re pulling 60mA from the input. However, the voltage drop across the diode and the internal resistance of the inductor and switch will cause you to lose some of that efficiency; you might need to pull 70mA to get the same result. 60/70 ≈ 0.86, or about 86% efficiency. Most boost converters have a ‘sweet spot’ where they hit around 90% efficiency, and many of them are optimized to deliver this at some common output voltage, like 3.3V or 5V, from some common input, say 1.5V or 3.3V.
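    The same arithmetic, written out (the 70mA figure is the illustrative number from the paragraph above, not a measurement of the actual watch):

    ```python
    # Boost converter power bookkeeping for the example above.
    v_in, v_out = 1.5, 4.5     # volts
    i_out = 0.020              # 20 mA delivered to the load

    i_in_ideal = i_out * v_out / v_in          # lossless converter: 60 mA
    i_in_real = 0.070                          # with diode/inductor/switch losses
    efficiency = (v_out * i_out) / (v_in * i_in_real)

    print(f"ideal input current: {i_in_ideal * 1000:.0f} mA")
    print(f"efficiency at 70 mA input: {efficiency:.0%}")   # about 86%
    ```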

    The need for two boost converters meant I had another choice to make. I could either try to run both boost converters from the battery voltage, or I could cascade them. Running both from the battery is tricky, because it’s uncommon to find a compact, integrated device that can run from a 1.5V input and deliver 13.5V. It’s possible with discrete components, but that takes up more space, increases design time, and doesn’t really save you any money. So I decided to cascade them. Now I had yet another choice to make: what should the logic voltage be? It could be anywhere between 3V and 5V. I tried 3.3V first, but when the 13.5V converter was activated it caused ripple that would dip below the 3V lower limit. I tried 5V too, but that was very inefficient for the first converter, and the current magnification was too great. I finally settled on 4.5V, which still had ripple, but not enough to cause unreliable operation.
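    To illustrate the current-magnification side of that trade-off, here is a hedged sketch of the cascade math. Every number below (the load currents and the converter efficiencies) is a placeholder chosen for illustration, not a measurement of the ChronodeVFD; the structure of the calculation is the point:

    ```python
    # Cascaded boost converters: battery -> logic rail -> 13.5 V rail.
    # All loads and efficiencies here are assumed values for illustration.
    v_batt = 1.5          # alkaline cell, nominal
    v_hv = 13.5           # grid/anode rail
    i_hv = 0.005          # assumed grid/anode load, 5 mA
    i_logic = 0.010       # assumed logic load, 10 mA, roughly rail-independent

    # assumed first-stage efficiency falls as the boost ratio from 1.5 V grows
    eta_first = {3.3: 0.85, 4.5: 0.82, 5.0: 0.78}
    eta_second = 0.80     # assumed second-stage efficiency

    for v_mid, eta_a in eta_first.items():
        # current the second converter pulls from the intermediate rail
        i_second_in = (v_hv * i_hv) / (v_mid * eta_second)
        # battery current needed to hold up the intermediate rail
        i_batt = v_mid * (i_logic + i_second_in) / (v_batt * eta_a)
        print(f"logic rail {v_mid} V -> battery draw ~{i_batt * 1000:.0f} mA")
    ```

    Under these assumed numbers, a higher logic rail costs noticeably more battery current; that is the current-magnification penalty, on top of the ripple behavior that drove the final 4.5V choice.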

    The end product lasts about 12 hours on a single AA alkaline cell, which was the original design goal. Part of the problem with alkalines is that they steadily drop in voltage as they discharge, and boost converter efficiency also drops as the input voltage is reduced. After the battery gets below about 1.35V, efficiency falls off a cliff and actually getting the boost converter to output the desired voltage becomes impossible. As such, while the watch will work with alkalines (achieving the design goal), they are not ideal.

    A much better solution was ultimately found with Nickel-Zinc rechargeable cells, which have a nominal voltage of about 1.6-1.65V, and maintain this voltage throughout most of their discharge curve. Boost efficiency increases dramatically, and battery life now extends well beyond 24 hours.