EEWeb Pulse - Issue 93


Electrical Engineering Community

EEweb.com


Design Your Hardware Online: PCBWeb is a free browser-based CAD application for designing and manufacturing electronics hardware. www.pcbweb.com

SCHEMATIC CAPTURE
Design multi-sheet schematics with our fast and easy-to-use wiring tool.

PCB LAYOUT
Route multi-layer boards with support for copper pours and design rule checking.

DIGI-KEY INTEGRATION
Real-time integration with the Digi-Key catalog and bill of materials manager.

Copyright 2013, Silicon Frameworks, LLC

PCBWeb.com


EEWeb PULSE

TABLE OF CONTENTS

4   Paul Darbee, Founder of DarbeeVision
    A conversation with this serial entrepreneur about his career in developing ways of viewing video that far surpass traditional fidelity standards.

12  Featured Products

16  A History of HF Radio Receivers
    By Rodney Green with Mulpin. An introduction to the interesting and challenging art of high frequency radio design and why this seemingly outdated technology could be of use to modern day engineers.

24  Alternative Energy Vehicles: Energy Storage and Supply
    By Alex Toombs. Why gasoline vehicles are not a long-term solution for our transportation needs.

30  Design & Analysis of TDC Converters in CMOS Technology - Part 1
    By Umanath Kamath with Cypress. An overview of various topologies for realizing a time-to-digital converter.

36  RTZ - Return to Zero Comic


EEWeb PULSE

Paul Darbee is a serial entrepreneur who has spent his career developing ways of viewing video that surpass traditional fidelity standards. He has had to wait over 40 years for the world’s technology to catch up with some of the discoveries he made back in the 1970s. We spoke with Paul Darbee about his extensive background in video synthesizers, his current project, DarbeeVision, and how Darbee Visual Presence is changing the face of video entertainment as we know it.

INTERVIEW


How did you get started in the industry? One of the main things you need to know about me is that I’ve been around for a while—I’m 66 years old right now, much to my surprise. I’ve been a serial entrepreneur for all of my life, and now, later on in my life, some of the things that I did earlier are coming full circle. My current project, DarbeeVision, is one of them. I actually had to wait over 40 years for the world’s technology to be able to support some discoveries I made back in the 1970s. In between that time, what I’m probably best known for is inventing and creating the preprogrammed universal remote control in the 80s. I’m a founder of Universal Electronics (NASDAQ: UEIC), the company that now dominates that field. They have sold well over a billion units by now and they are all over the world, so they are pretty much the standard for remote controls. One thing that I certainly learned from my experience there is the reality of taking something from the glimmer of an idea to a global phenomenon—it’s not for the faint of heart. The actual history of that particular company is now kind of lost in the mists of time. It is seen as an overnight success, but I assure you, it was anything but that. It took us at least two years after we had a product developed before we could

get any traction on it at all. Once we did, things did move pretty quickly. Personally, I learned a tremendous amount about the realities of making a widely distributed consumer electronics product. People tend to think that consumer electronics is just a bunch of junk and very forgiving, but in reality it’s quite the opposite. It goes in very high volume, so it has to be extremely high quality in order for the failure rates not to mount up to quantities that can’t be handled. In 1996 I left the company to try to attempt to productize a networked remote with a screen on it, but it didn’t work out, even though it was part of the dot-com boom. It was still a little too early to do that kind of thing. Now, of course, we do have a third screen and it isn’t the remote control, it’s the cellphone. In 2000, I went back to something that I had discovered back in the early 70s while I was working at the first company I founded, Optonics. It was here that I had my first patented invention that used fiber optics as pixels in a display. The idea was to array the fibers on the display in a grid pattern and then take the fibers on the other end and arrange them in a line. Underneath the line runs some film transverse to the line, and you shine a light into the fibers so that the film with the stripes can modulate each individual fiber. The film runs slowly and continuously and yet displays an animated picture on the screen.

“Back in 1972, we were watching 3D TV. It was kind of interesting, but I didn’t think there was much that you could do with it.”

That worked—you could see it in broad daylight and it looked, for all the world, like the LED boards that we see so commonly now at car dealerships and


all over Las Vegas. I was able to work on a video synthesizer as well, which was my true passion. I made interesting video that we recorded and put on these displays. The video synthesizer was an analog device—which we laugh at now—but one property that analog has is that it’s all in real-time, and it’s really fast. This video synthesizer worked pretty much like an audio one—it’s effectively a big matrix switch, and instead of processing audio signals, it’s processing video signals. What you matrix together are various processing modules—multipliers, adders, filters, deflection amplifiers, etc.—that allow you to control not only the colors and pixels, but also their placement. With that lashup, I was able to do a large number of experiments with video, some of which were fairly forward-looking and involved two cameras—a left and a right camera. With that, we could create 3D stereo video. We immediately used the synthesizer to color one of the images blue and the other red; when you looked at that with blue and red glasses, you got the classical anaglyph stereo video. Back in 1972, we were watching 3D TV. It was kind of interesting, but I didn’t think there was much that you could do with it—it worked about as well as the stereo TV that we have now, and that wasn’t really what I was interested in. What I wanted to see was if I could find a way of taking the information that’s in the difference between the two images—the ocular disparity—and see if it is possible to process that information in such a way that you could put it into a single, two-dimensional image that could be seen without any glasses. I was hoping it would become a twodimensional image that you could smuggle depth cues into. With the synthesizer, I was able to try a whole lot of different things and it didn’t take very long to stumble on a good



Darbee Visual Presence

Standard Definition Image

way of processing the images, which turned out to be pretty simple. If you used the lens on one of the cameras to defocus the picture—say, on the right image—and then used the synthesizer to subtract the blurred image from the other sharp image, it works! It was the most amazing thing. Depending on what the cameras were looking at, especially natural scenes like trees, it was just miraculous. It would sometimes give you a depth effect that could be interpreted as a 3D image. It was a stunning effect and I thought it would be a wonderful thing to somehow bring to the world. However, the necessary digital processing to perform the needed convolutions for this goes up enormously fast, and for serial processors that’s almost a non-starter. For that, and

many other reasons, I just put it on the shelf as a curiosity and did other things like the universal remote control and some other companies that did machine-vision controls. How did your extensive experience help you start DarbeeVision?

The first thing that I did was to recreate the analog video synthesizer as a digital program. It was about 500 times out of real-time, so it took many seconds to process a single frame. It was slow, but pretty! I was mostly downloading stereo frames that I could find on the Internet. I also created a few myself and that let me see afresh what the subjective effect looked like, which was glorious. I could also work in detail with the underlying mathematics, and in the digital world everything was very stable. You could try things out and tweak different parameters and play with the convolutions. I quickly learned the dark side of this underlying defocus-and-subtract algorithm—it can do bad things in an image just as easily as it can smuggle in the depth cues.

Over the preceding decades, I studied a lot of math and digital signal processing and learned the background of what other people had done along these lines in computational imaging. One thing that I knew was that merely defocusing an image and subtracting it from itself wasn’t anything new. That was discovered by photographers early on and it’s called an unsharp mask. It is a very well-known high-pass edge enhancement filter.
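For readers unfamiliar with the term, a minimal single-image unsharp mask can be sketched in a few lines of Python. This is only the standard photographic operation he refers to, not DarbeeVision's stereo-derived two-image variant or the Perceptor, which remain proprietary; the blur radius and strength below are arbitrary illustrative values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, sigma=2.0, strength=0.5):
    """Classic unsharp mask: subtract a blurred copy to boost edges.

    image: 2-D float array (a grayscale frame). sigma and strength are
    arbitrary illustrative values, not anyone's production settings.
    """
    blurred = gaussian_filter(image, sigma=sigma)
    sharpened = image + strength * (image - blurred)
    return np.clip(sharpened, 0.0, 1.0)

frame = np.random.rand(480, 640)   # stand-in for a video frame
enhanced = unsharp_mask(frame)
```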



EEWeb PULSE “Our goal is not to be in 200 movies a year, but to be in 200 million TVs as well as the Blu-ray players, game consoles, and mobile devices.” What was innovative was that we aren’t taking the same image and subtracting it from itself, but we’re working with two different images. It is true that working with a stereo picture, there are going to be places where the images are converged. In that part of the picture, yes, it does degenerate into an unsharp mask, but that’s normally only a small part of the picture. If you are looking around the object, say somebody’s face, the part where it’s actually converged is really just a tiny part of the image; generally, the image is not converged. For instance, if you are focused on somebody’s mouth and you are converged there, the tip of their nose is not converged at all—the left and right images are different from each other. If you do the defocus-and-subtract where they’re different, you have now done something that is not an unsharp mask, but something new. Because of the novelty and the inventive height, I knew that you could file for a patent on the underlying defocus-and-subtract, and so I did. However, I didn’t have a show-goer until I could solve the problem of starting with a single monoscopic image. I did have an advantage in the fact that I knew that when I created the other image, all I was going to do with it was defocus it, in order to subtract it from the sharp incoming image. That made the task a little more forgiv-


ing, and I realized that I didn’t have to create as good a picture as you would need if you were trying to do a 2D-to-3D process. These days, there are 2D-to-3D processors that come inside most 3D TVs. There are also after-market boxes that you can buy that can take a 2D video picture and turn it into 3D. Now I had to grapple with the real problem: what to do with the unnatural parts of the image that the defocusand-subtract creates. I already said that where the image is converged, it degenerates to an unsharp mask. People don’t use unsharp masks much because they create outlines that appear as artifacts around objects, and they also emphasize noise. There might be a sort of Hippocratic oath in image processing: ‘Above all do no harm.’ Or you might say, ‘One person’s enhancement is another person’s artifact.’ We aggressively alter the picture in order to embed depth cues. The luminance of a pixel might go up or down to achieve that effect. The challenge is to only modify those pixels that are of interest in the image. Seeing is mostly discarding information—you only pay attention to what is salient in your visual surroundings. So, I looked at how attention-casting in the visual cortex might work and tried a few things mathematically, and voila, the Perceptor was born.


Perceptor is just a name I made up for what is really a bag of tricks for conditionally applying defocus-andsubtract in real-time to an image. The Perceptor is our real secret sauce. It is not patented—we keep it as a closely-held trade secret. What was the next step in promoting this product? With a working software – dubbed the ‘Darbonizer’ – in hand, that was the beginning of the decision to try to monetize this and get it out there. The next step was to start showing it around and seeing what people thought of it to see if it was worth pursuing. The responses were unanimous. It was very clear that the difference between the images was profound and that it wasn’t something you could get from high-definition alone. A lot of the technology behind the saliency maps, defocus-and-subtract, and attention casting actually comes from neurophysiology and brain theory—it essentially mimics what our brains already do when viewing images. In that sense, you go beyond even what a mere camera can do, even a perfect camera. One of the buzz phrases we use is that we go Beyond Fidelity™. How does Darbee Visual Presence™ work? Well, the next thing I wanted to work on was not just running images through this process to get a final result—what I was more interested in doing was developing it the rest of the way so that it could be built into the TVs at the end of chain. There was another really strong reason for doing that—an economic reason. When you add Darbee Visual Presence to a picture, you make the file bigger by adding information to it. It makes the file anywhere from 15 to 25% larger. That goes in the opposite direction of what people need to do


with compression. The demand for compression is driven by economics not only because you want to put more media on a disk, but much more importantly, because you want to squeeze more media down a pipe. The reason that we have as much digital video as we do is because of compression. The fact that compression is out of the hands of the engineers and into the business guys’ hands means that they are turning the compressors up beyond what they really can comfortably handle without generating artifacts.

Where we want to be in the chain is at the very end inside the TV and before the LCD screen. That’s our sweet spot. Our goal is not to be in 200 movies a year, but to be in 200 million TVs that get made as well as the Blu-ray players, game consoles, and mobile devices—or just anywhere at the end of the chain. It is a permanent place for hardware and firmware that will do an improved job of restoring images that have been stepped on by compression. The end of the chain is a long-term, viable niche for computational imaging.

take this out of the software regime where it takes a second and a half to process a frame and process 60 frames a second instead? How were we going to build a digital human that will recognize the image and see what’s wrong with it and turn a knob to fix it? We knew the images could be fixed, but we needed an algorithm to do that, which turned out to be very difficult. With an enormous amount of work, we knocked those problems down one by one—but unfortunately, I can’t tell you how, because that remains our trade secret.

Video purists can’t stand to watch stuff that’s been sent down a cable, much less Netflix or YouTube. The purists need to start with Blu-ray or even 4K upscaling. In reality, though, most people don’t really care much about image quality, especially those of us that are old enough to remember the old “Hopalong Cassidy” black and white TVs where you had to bang on the side and move the rabbit ears around to get rid of the horizontal tear. We don’t examine pixels, we watch TV.

Where does DarbeeVision fit in among HDTV and Blu-ray discs that promote image enhancement?

What is the Darblet?

Media distributors need to use compression. That’s a long-term economic driver, which means that there’s going to be a continuing arms race between wanting to compress more and needing to do a better job of image repair during decompression. That translates into job security for people like us who are in the business of image processing. It was not a goal of DarbeeVision to do image improvement in the sense of fixing up decompression or getting rid of other artifacts that might accrue in the production and transport of an image. We also were not in the business of scaling or modifying the colors or de-noising the compression artifacts. Instead, we assume that common video processing will be done by equipment that comes prior to us.

We’re unique in what we do with DarbeeVision and we don’t attempt to compete with the major labs that are doing image improvement—we’re kind of complementary to that. It is our goal to be built into the billion transistor chips that are already in TVs. We know, from experience with the remote control, that getting to those companies doesn’t happen right away. I thought that our product was just too good and that it needed to get out there, starting as a real-time hardware accessory. But how were we going to

The Darblet is our after-market accessory product. About the size of a deck of cards, it has one HDMI input and one HDMI output. The Darblet has three viewing modes, along with a Darbee Level control, all settable by the user. One of the modes is Hi Def, which pairs with high-end equipment and high-quality video for the best results. The “set-and-forget” for high-end is Hi Def mode with a Darbee Level setting of around 50 or 60. Everybody, even the harshest critics, has given it universal accolades. Another viewing mode is Gaming, which works especially well with video games and computer-generated films. If the image is computer

DarbeeVision Darblet™ HDMI Video Processor



“We’ve been getting unanimous accolades from purists who are perfectly satisfied with our mode varieties.” generated, then unless the creators deliberately put noise into it, there isn’t any. That means that we can use a different set of artifact removal tools to give users the option of putting it into Gaming mode. We call the third mode Full Pop. With this one, you probably are going to see artifacts in the content—most notably in text. You’re going to see some things that, if you were going to do a before-andafter, you might notice a few things added on top of the image. This is not really the setting for the purists, but it works really well for lower-quality video. We got the Darblet working in realtime in 2010. We hired our “boy genius” engineer who helped me do that and, in fact, he can do many things that I’m not capable of doing. One of the things he’s a whiz at is graphics cards and FPGAs. He first got it running in real-time on an nVidia graphics card using nVidia’s CUDA supercomputer parallel processing. Not only had we successfully gotten rid of the knobs, which we developed in software, but with CUDA you could run it in realtime on practical hardware. We knew that we really couldn’t run in the field on such graphics cards because we used all the graphics card resources. But in fact the goal was not to get it


to run on a graphics card, it was to get it running on a single chip. The next thing to do was to get it onto an FPGA. We were able to do the port to an FPGA in just a matter of a couple of months. It was very clear after we did the first bit of work that it was all going to fit on a low-end consumer version of an FPGA chip. In the intervening time between our first accomplishment of getting it to run and now, we’ve improved the resource utilization of the algorithm to the point where we use an astonishingly small number of resources. We use only 700K bits of RAM and we use about 80,000 gate equivalents, which is 17,000 logical elements (LEs). That’s all we use. We don’t use a processor, we don’t use a digital signal processor (DSP), and we don’t use any external memory like a frame buffer. Our total delay through our pipeline is only 200 microseconds. By processing the image in real-time with that small number of resources, it’s highly practical. We’re in the process now of taking the FPGA and turning it into an application-specific integrated circuit (ASIC), which will bring the price down enormously. In about a year we should be able to offer aftermarket products at better consumer prices. More importantly to us, we


can now offer it to third parties—that is to say, OEMs that want to build it into their Blu-ray players and TVs, and private-label outfits that want their own brand of accessory product. Is the Darblet available to purchase? The Darblet is on sale now and has been since May of 2012. It uses an FPGA, which is not a cheap chip, as well as two other HDMI chips and a microcontroller supervise the HDMI. So three of the chips are there just to handle the HDMI and our algorithm is done completely in one chip without any external components whatsoever. What we have now is a four chip product, which, because of its bill of materials, is selling at $319 after it goes through distribution. We don’t sell it direct. We have dealers all over the world now, and we are getting some viral attention because when people first get a Darblet they show it to their neighbors and we get extra sales from word-of-mouth. We’ve been getting unanimous accolades from purists who are perfectly satisfied with our mode varieties. They’ve been essentially saying what I’ve been saying all along—when you look at good pictures and say to yourself that they can’t be improved any more, but strangely enough, they can be. People are noticing that now through our Darblet. ■



Online Circuit Simulator PartSim includes a full SPICE simulation engine, web-based schematic capture tool, and a graphical waveform viewer.

Some Features include: • Simulate in a standard Web Browser • AC/DC and Transient Simulations • Schematic Editor • WaveForm Viewer • Easily Share Simulations

Try-it Now!

www.partsim.com


Technology You Can Trust

Take the Risk out of High Voltage Failure with Certified Avago Optocouplers IEC 60747-5-5 Certified

Optocouplers are the only isolation devices that meet or exceed the IEC 60747-5-5 International Safety Standard for insulation and isolation. Stringent evaluation tests show Avago’s optocouplers deliver outstanding performance on essential safety and provide exceptional high-voltage protection for your equipment. Alternative isolation technologies such as ADI’s magnetic or TI’s capacitive isolators do not deliver anywhere near the high voltage insulation protection or noise isolation capabilities that optocouplers deliver. For more details on this subject, read our white paper at:

www.avagoresponsecenter.com/672


FEATURED PRODUCTS

Quad-Frequency MEMS Oscillators

Integrated Device Technology, Inc. announced the industry’s first high-performance quad frequency MEMS oscillators with multiple synchronous outputs. IDT’s latest oscillators offer configurable outputs in an industry-standard compatible package footprint, saving board area and bill-of-materials (BOM) cost in communication, networking, storage, industrial, and FPGA applications. The IDT 4E series ±50 ppm enhanced MEMS oscillators integrate an LVDS or LVPECL output with a synchronous CMOS output into a single package, eliminating the need for an external crystal or secondary oscillator. For more information, please click here.

Excellent Dimming LED-Driver ICs Power Integrations introduced its latest family of LED-driver ICs, aimed at consumer, commercial and industrial lighting applications. The new LYTSwitch™ IC family delivers tight regulation and high efficiency for tube replacements and high-bay lighting, while providing exceptional performance in TRIAC-dimmable bulb applications. LYTSwitch ICs combine PFC and CC into a single switching stage, increasing driver efficiency to more than 90% in typical applications, delivering a power factor greater than 0.95 and easily meeting EN61000-3-2C requirements for total harmonic distortion (THD). For more information, click here.

ADC in Family of High-Speed CMOS Converters Fujitsu, the market-leading provider of high-speed data converters, announced the first in a new family of 8-bit, power-efficient, 28nm CMOS converters. The analog-to-digital converter (ADC) addresses the need for large-scale global deployment of single-wavelength, 100Gbps optical transport systems, and provides a solution for future short-range optical and backplane interconnects. Now in its third generation of process technology, the 28nm ADC supports sampling rates from 55 to 70 GSa/s (billions of samples per second) with scalable analog bandwidth. The new ADC, which is based on Fujitsu’s proven CHAIS architecture, will be shown for the first time at the OFC/NFOEC conference, March 18-21, in Anaheim, California. For more information, please click here.

83mm Phase Control Thyristors

IXYS Corporation announced that its wholly owned UK subsidiary, IXYS UK Westcode Ltd., launched a new addition to its 83mm die phase control thyristor product group. The thyristors, rated at 2200 volts, are the latest introduction to this new product group, which uses an integrated die construction and improved package design for better electromechanical and thermal performance. The 2.2kV thyristor has an average current rating (case temperature 55°C) of 4340 amperes, with a junction to heat sink thermal resistance of 0.008 kelvins per watt. The new thyristor is constructed using an all-diffused silicon wafer slice fused to a metal header and encapsulated in a fully hermetic package. The device is available in two industry standard capsule heights and two different voltage grades. For more information, please click here.


FEATURED PRODUCTS

Single Platform GUI for PMBus Power Product

The PowerNavigator GUI offers a single software platform for all PMBus-enabled Power Management products at Intersil, meeting all user requirements with complete design editing from design start to hardware implementation. The PowerNavigator software allows simple configuration and monitoring of multiple PMBus-based products using a PC with a USB interface. It can monitor large systems through dashboard views, allows for easy debugging during prototyping, and provides comprehensive visualizations and power-centric intelligence such as sequencing and fault management. For more information, please click here.

DSI to Single-Link LVDS Bridge

The SN65DSI83 DSI to FlatLink™ bridge features a single-channel MIPI® D-PHY receiver front-end configuration with 4 lanes per channel operating at 1 Gbps per lane, for a maximum input bandwidth of 4 Gbps. The bridge decodes MIPI® DSI 18 bpp RGB666 and 24 bpp RGB888 packets and converts the formatted video data stream to a FlatLink™ compatible LVDS output with pixel clocks from 25 MHz to 154 MHz, offering a single-link LVDS with four data lanes per link. The SN65DSI83 can support up to WUXGA 1920 × 1200 at 60 frames per second, at 24 bpp with reduced blanking. It is also suitable for applications using 60 fps 1366 × 768 / 1280 × 800 at 18 bpp and 24 bpp. For more information, please click here.

Fiber Optic Receiver for 50 MBaud MOST

AFBR-2013 receivers are designed to receive up to 25 Mbit/s optical data that is biphase coded (up to 50 MBaud). They are packaged in 4-pin, transfer-molded, low-cost packages ready for assembly into MOST® plastic fiber optic connector receptacles. Output data has TTL switching levels, compatible with MOST® Network Interface Controller ICs. These optical components are specified for operation over a -40°C to +95°C temperature range and for the reliability requirements of automotive applications. The AFBR-2013 devices can be processed with reflow soldering. For more information, please click here.

Next Generation Low-Power LTE Renesas Mobile’s latest single-chip multi-mode TD-/FDD-LTE modem, the SP2532, as well as SP2531 based devices, will feature in several demonstrations that highlight the significant power and performance advantages it presents. One demonstration will compare the Renesas Mobile modem against a recent competitor’s product highlighting that, with a 2 mA/Mbit power consumption, the SP2532 delivers the industry’s best power consumption figures at a sustained 150 Mbit/s throughput. When combined with its tiny footprint, which is 45% smaller than the previous generation modem, it sets a new competitive threshold others must beat in the race to deliver the best LTE modem for tablets, super- and smartphones, as well as routers and datacards. For more information, please click here.



FEATURED PRODUCTS

Portability & Power All In One...

Debug digital designs on an iPad, iPhone, iPod. Can a logic analyzer be sexy? Watch the video and weigh in...

Logiscope transforms an iPhone, iPad or iPod into a 100MHz, 16 channel logic analyzer. Not only is it the most intuitive logic analyzer available, the triggering is so powerful you’ll be able to count the hair on your bug.

See why our innovation keeps getting recognized.


TECH ARTICLE

A History of HF Radio Receivers: Why We Need it Now

Rodney Green, Mulpin Assembly Technology

This article is intended to introduce the beginner to the interesting and challenging art of high frequency radio design. It is also intended to entice the seasoned professional to take a deeper look at this art and think outside the box, which can bring both small and large improvements to the technology. For instance, there are many things that can be tried today which were impractical a few decades ago. The way forward can be improved by knowing what has gone before us and, in some instances, what has been missed. I’ve spent more than 40 years experimenting and designing both new and rarely tried concepts — with the aim of improving the art form. Some of the results have been quite spectacular, and links to some of my work are provided below. This first article is limited in technical content, but as the series progresses more technical information will be included, along with references for study material. Radio engineering is a challenging discipline, and these articles will not be an entire radio course. Hopefully, however, they will point the way forward for those who wish to make the journey.



Why look at history?

The description of a modern receiver is quite different from that of a receiver of thirty years ago. The philosophy of what is important in a radio receiver is constantly changing. This is due to changes in technology which enable ever better performance. It seems to me to be a good idea to see why that is so, and a good way to do that is to delve into the history of radio receivers. Knowing the history of radio receivers allows you to see how and why things have changed with time, and allows a designer to better spot a good design. It may also be useful to use parts of older designs to complement a new design. For instance, oscillators, which are used in most radio receivers, have changed over time to the point where the noise generated within them is likely to shift the design philosophy for a new generation of radio receivers. This will be covered in a future article.

Another reason to be familiar with the history of radio receivers is that there are holes in technology along the way down the historical path — often more than you might think; there are areas which have been under-developed due to reasons such as impracticability at the time of the design. The size of components, for instance, might once (in the days of early valve radio receivers) have been too large or too expensive for a certain design, but might now be much smaller and cheaper. Development in the components and other design elements opens the door for these holes in technology to be filled, and elements from old designs to find a place in modern and future radio receiver designs. Other early designs did not “take off” due to the lack of technical knowhow, and lack of components (such as sharp crystal filters) which exist today. One good example of this is what was known as the “Superinfragenerator” concept.[1][2] Though this concept offers a simple and effective mode for digital communication, it was never taken up again after its development. Now that we’ve discussed why it might be important to look at the history of radio ideas, let’s examine some early developments in radio design.

Early Radio Development

Selectivity

Selectivity is the ability of a radio receiver to sort out one frequency from another, and for the transmitter to keep the signal within the confines of what is needed for the receiver to recover the original signal. Figure 1 is a graphical representation of the selectivity of cascaded tuned circuits. Selectivity was an early issue to be conquered by the budding wireless industry. This was because all stations within listening range at that time would interfere with each other because they all broadcast on a very wide frequency band using spark transmitters. See the black trace of Figure 2. This severely limited the usefulness of the new wireless technology.

Figure 1: The cumulative effect on selectivity of one, two, and three tuned circuits. The Y axis is the amount of attenuation on a signal which passes through a tuned circuit. Notice that on the resonant frequency, very little attenuation occurs. However, outside of the resonant frequency, a signal is attenuated more when more tuned circuits are cascaded. Each may be separated by an amplifier.

Figure 2: Spectrum of a 4.5 MHz spark transmitter (black) and of a clean signal from a valve transmitter (green). Output power is shown on a log scale, and is relative.
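As a rough numerical companion to Figure 1, the sketch below models each tuned circuit as a standard second-order bandpass section and cascades buffered copies, so the attenuation in decibels simply scales with the number of stages. The 7 MHz center frequency and loaded Q of 50 are illustrative assumptions, not values taken from the figure.

```python
import numpy as np

def cascade_response_db(f_hz, f0_hz, q, n_stages):
    """Magnitude response (dB) of n identical, buffered tuned circuits.

    Each stage is modeled as a second-order bandpass section; with an
    amplifier (buffer) between stages, the dB attenuations simply add.
    """
    x = q * (f_hz / f0_hz - f0_hz / f_hz)        # normalized detuning
    single_stage_db = -10.0 * np.log10(1.0 + x**2)
    return n_stages * single_stage_db

freqs = np.array([6.0e6, 6.5e6, 7.0e6, 7.5e6, 8.0e6])   # span of Figure 1
for n in (1, 2, 3):                                      # single, double, triple tuned
    print(n, np.round(cascade_response_db(freqs, 7.0e6, 50, n), 1))
```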

One of the causes of the loss of so many lives aboard the Titanic was the lack of selectivity of its radio receivers and, in fact, the broad frequency spectrum of the transmitted signals of that time as well. Titanic’s radio operator was trying to receive telegraphic signals from New York, but other ships were sending iceberg warnings. The two signals could not be separated, which resulted in Titanic’s operator asking the offending ship station to shut down its warnings, which they did and went to bed! This prevented that nearby ship from hearing the CQD (or SOS) from Titanic. The rest, as they say, is history.

TRF (Tuned Radio Frequency) Receivers

Triode radio valves were invented by Lee De Forest in 1908, and within a few years, radio valves began to replace the earlier broad band spark transmitters and, with that, cleaner signals could be transmitted, as shown in the green trace of figure 2. Tuned circuits, isolated from the antenna, could also be used to obtain good selectivity, along with valves as amplifiers and detectors in receivers. Thus it can be seen that selectivity is important in both transmitters and receivers. The first radio receivers of interest to us were the early Tuned Radio Frequency (TRF) receivers. Ernst Alexanderson patented the design for TRF receivers in 1916; the receivers used one to three stages of amplification from the antenna, each separated with one or two tuned circuits. Figure 1 shows the effect of cascading up to three tuned circuits. Multiple tuned circuits increase the selectivity of a receiver, as the selectivity is cumulative. These circuits were tuned individually with an amplifying triode between each one, and each had a logging scale (usually 0 to 100) printed on the tuning dial. Tuning a station was tricky as each of the tuned circuits

Figure 3: TRF receiver circa 1925. Each of the three large tuning knobs controls a tuned circuit as described in the text.

needed to be on the same frequency, and it was easy to miss a weak station by tuning one of the tuning knobs too far off from that of another stage(s). The system worked however, and with skill it was possible to listen to a wide variety of the experimental stations at the time. Figure 3 shows a picture of a typical TRF receiver circa 1925. In the early days of wireless, the lower frequencies were preferred, as it was thought that the distance covered by a signal increased as the frequency was reduced via what is known as the Ground Wave. For that reason, the Radio Amateur Service was allocated the higher frequency

Marconi’s Maiden Voyage

The Titanic’s sister ship, the Olympic, also had a Marconi room with a 5-kW plain spark installation. The Olympic was actually the first vessel to be serviced by the Marconi Company back in June of 1911. The Olympic’s Marconi Room and unknown operator are pictured to the left.

Source: www.marconigraph.com



EEWeb PULSE bands, as they were thought to be near useless for long range communication by Government agencies. It was the amateurs who discovered that as the frequencies went up, the signals returned to the earth hundreds or even thousands of miles away from the transmitter not by the ground wave, but by being bounced off the ionosphere. Now as the frequency rose, they discovered, a tuned circuit of a given design became less selective, which made it necessary to increase the number of stages with tuned circuits to get the desired selectivity. This was clearly unsatisfactory as an even larger number of tuning dials was unworkable in practice. To combat this, ganged tuning was developed, In ganged tuning, one dial was used to control all of the tuned circuits together.

Sensitivity

The earliest radio receivers had to work with signals plucked from the air without any amplification at all. To add to this problem, the early detectors (called coherers) were quite insensitive, and required very large antennas. More sensitive crystal detectors made from galena or carborundum were also common at that time, but their poor reliability made them unsuitable for important uses such as shipping and the military. The introduction of the triode valve meant that signals could be received from stations at a much greater distance from the transmitter. With transmission itself, a purer transmitted signal meant greater efficiency by not transmitting signals outside the bandwidth required by the receiver. This meant that more stations could be on air at the same time on different frequencies without interfering with each other. Triode valves were capable of considerable amplification of radio signals, and even more so at audio frequencies. The sensitivity of radio receivers skyrocketed to unprecedented levels. The regenerative receiver was developed, in which a small amount of the amplified radio frequency energy is fed back around one stage. This had the effect of greatly increasing both the selectivity of a tuned circuit and its amplification, which in turn increases the sensitivity of an individual stage of a radio receiver. The selectivity, however, still varied with the operating frequency, and regeneration was not the final answer to selectivity. The most desirable aspect of the selectivity of a receiver is that it remains constant with frequency. The superheterodyne receiver with this advantage was invented during and directly after the Great War, and was patented in 1918 by Edwin Armstrong.
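A simple linear feedback model shows why feeding a little amplified energy back around a stage raises its gain (and, for a tuned stage, its effective Q) so dramatically: the closed-loop gain is the open-loop gain divided by one minus the loop gain. The numbers below are purely illustrative.

```python
# Positive feedback around one stage: closed-loop gain = A / (1 - loop_gain).
# As the loop gain approaches 1 the stage nears oscillation, and both the
# gain and the effective selectivity (Q) rise sharply. Illustrative values only.
A = 10.0                                   # assumed open-loop stage gain
for loop_gain in (0.0, 0.5, 0.9, 0.99):
    print(f"loop gain {loop_gain:4.2f} -> effective gain {A / (1.0 - loop_gain):7.1f}")
```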


The superheterodyne receiver was somewhat more complex in design than previous types, but the advantage of constant selectivity with changing frequency made this type of receiver last to this day. The TRF and regenerative receivers were vast improvements over what went before. The introduction of tetrode and pentode valves a little later also meant that fewer valves were needed to make a radio receiver because these new tubes had a lot more amplification than earlier triodes. The uptake of the superheterodyne receivers was quite slow in the 1920s due to the reluctance of manufacturers at that time to pay royalties to the inventor. In time it became apparent however, that if a company was not producing the superior superheterodyne receivers, they would go out of business. There is much that can be learned from this even today. For decades both sensitivity and selectivity were the two most important issues related to receiver design.

Other Early Receiver Types The Super Regenerative Receiver The super regenerative receiver was also invented by Edwin Armstrong in 1922, and was the most-used re-

Edwin Armstrong



ceiver above 50 MHz. The reason for the use of the super regenerative receiver above 50 MHz was that this type of receiver radiated noisy signals from the antenna. This was a nuisance on the populated HF (lower frequency) bands. Above 50 MHZ was still new ground; it was not used much and so the bands were wide enough for operators to just find a quiet frequency. It is also quite broad in its selectivity, however, and was eventually relegated to higher frequencies where frequency stability is less important. Still, super regenerative receivers have one giant advantage over other receiver types, and that is that a one valve receiver can hear signals right down to the background noise level. With the high price of valves in the early days, a one-tube receiver was a huge advantage, and super regenerative designs are still popular today in radio control toys, garage door openers, and some data receivers.

The Fremodyne This type of receiver was developed in the 1940s and consisted of a combination of superheterodyne and super regenerative techniques. They were used on the VHF bands and had very good sensitivity — but also poor selectivity. In 1970 the receiver saw a revival with an Australian solid state design.3

The Superinfragenerator This receiver enjoyed a very short life indeed. The superinfragenerator combined the selectivity of the superheterodyne designs and the sensitivity of the super regenerative receiver designs. The first part of the receiver however, had problems with what is known as receiving an image frequency, which allows the receiver to listen on both the desired frequency and on an undesired image frequency. The reasons for the existence of this image frequency and how to avoid it will be discussed in Part 2 of this series.


About the Author

Rodney Green has been (according to his parents) interested in things electronic since he was two years old. At about 17 years of age he joined the Australian Postmaster-General’s Department (PMG) as a trainee specializing in radio communication. He spent much of the following twenty years maintaining both high power broadcast and television transmitters. He also specialized in radio telephone servicing and designed ancillary equipment for the sites at which he worked. At that time a radio telephone was the size of a small suitcase. Rodney’s hobbies include amateur radio, and he has the call sign VK6KRG. Within that hobby he designed high performance radio receivers and transmitters for use in the amateur bands, many of which have been published in the cutting edge radio experimenters’ magazine QEX and other publications internationally. This hobby continues to this day, when time permits. After retrenchment, Rodney was re-employed by Australia’s largest manufacturer of radio studio broadcast equipment, PKE, as the personal assistant of its founder and chief engineer Poul Kirk. Here he designed schematics and PCBs using the early CAD programs of the time. From PKE, Rodney moved to Barrett Communications as the RF engineer. He held this position until a fully qualified RF engineer could be found, but he continued designing under his direction until he retired from Barrett Communications in 2005 to work on his Dirodyne Radio architecture and other inventions, such as the Gen 1 Mulpin concept (which has US and international patents pending). ■

» CLICK HERE to comment on the article.

References

1. QST Magazine, December 1935.
2. The ARRL Handbook, 1936, page 257.
3. Electronics Australia, May 1970.



EEWeb Electrical Engineering Community
www.eeweb.com

EE Making Wireless Truly Wireless: Need For Universal Wireless Power Solution

Dave Baarman Director Of Advanced Technologies

"Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium laudantium,

doloremque totam

rem

aperiam,

eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae

ARTICLES

dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem vel eum iure reprehenderit qui in ea voluptate velit esse quam nihil

JOBS

COMMUNITY

DEVELOPMENT TOOLS

JOIN TODAY


Breathe new life into medical product interfaces

NXP’s proven interface products enable medical and health system designers to add features with minimal modifications. Within our portfolio you’ll find LCD displays and capacitive touch interfaces, system connectivity bridges & UARTs, LED controllers, real-time clocks, and I2C-bus peripherals & enablers. To learn more, visit http://www.nxp.com/campaigns/medical-interfaces/2248


TECH ARTICLE

Alternative Energy Vehicles: Energy Storage and Supply

Alex Toombs, Electrical Engineering Student

Energy resource availability is among the most critical concerns that we will face in the coming years. With exceptionally volatile gasoline prices and the knowledge that peak oil production is in the near future, many focus their time upon improving the efficiency of energy usage for transportation. Researchers work on everything from improving gas mileage of current vehicles to designing entirely new technologies upon which to operate a vehicle. Most agree that gasoline vehicles, though they are the most supported in the current infrastructure, are not a long-term solution to our transportation energy needs.

Electricity is a possible fuel source for alternative energy vehicles, considering the current infrastructure for generation and distribution. Batteries, fuel cells, or ultracapacitors provide electricity with which a motor turns the wheels directly, bypassing the combustion of fuel. While these solutions carry a lot of promise, they also have many difficulties inherent to their technologies. Currently, our infrastructure best supports gasoline vehicles, as there are wide-spread locations for convenient refueling. As a result, other types of alternative energy automotive technologies face a challenge not only from the research needed to go into their development, but also from the lack of infrastructure. As gasoline grows more expensive, other alternatives become much more economically viable.




Prius Battery Cells (courtesy of Wikimedia user LossIsNotMore)

Fuel cell technology has been around for a number of years already, but has not yet been commercially viable in automobiles. Fuel cells are devices that convert chemical energy from a fuel source into electricity, which can then be used to power motors or other devices. The first fuel cell was developed in 1839, and has been used in a variety of applications since. For instance, Google and other companies with large server farms use Bloom Box, a popular type of fuel cell, as onsite backup for high uptime computing. The most common fuel cells are known as proton exchange membrane (PEM) fuel cells. These take in hydrogen as a fuel, cracking apart the hydrogen molecule to produce two protons and two electrons per molecule. A selective membrane allows the hydrogen ions to diffuse across the membrane and recombine with oxygen as water, a clean byproduct. The electrons cannot flow through the membrane, and are thus constrained to flow from the anode to the cathode, through a load. An illustration of this is shown below.
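To make the electron bookkeeping concrete, the sketch below converts a hydrogen flow rate into an ideal electrical power using Faraday's constant and the two electrons released per H2 molecule. The 0.7 V operating voltage is an assumed typical value under load (the theoretical maximum for a hydrogen-oxygen cell is about 1.23 V), and the flow rate is likewise illustrative.

```python
F = 96485.0   # Faraday constant, coulombs per mole of electrons

def fuel_cell_power_w(h2_mol_per_s, cell_voltage=0.7):
    """Ideal electrical power (W) from hydrogen consumed at the given rate.

    Each H2 molecule gives up 2 electrons at the anode, so the current is
    2 * F * (molar flow of H2); power is that current times the voltage.
    """
    current_a = 2.0 * F * h2_mol_per_s
    return current_a * cell_voltage

# Example: about 1 gram of hydrogen per second (roughly 0.5 mol/s)
print(f"{fuel_cell_power_w(0.5) / 1000:.0f} kW")   # roughly 68 kW
```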


PEM fuel cells may be very quickly refueled, like gasoline tanks in cars. A fresh tank of hydrogen fuel allows a cell to continuously run. The downside here is that it is very difficult to transport hydrogen in any economical fashion, as the gas, liquid, and solid forms of hydrogen are all

PEM Fuel Cell (courtesy of Wikimedia user Rosemary)



incredibly difficult to purify and store. Even if the tanks could be kept safely, they would take up too much room in a vehicle to be used effectively. Additionally, there is no refueling infrastructure available for fuel cells, unlike gasoline—which has the advantage of being available everywhere. One advantage PEM fuel cells have is the considerable lack of harmful emissions from the automobile that the fuel cell powers, compared to a gasoline-powered vehicle. That does not include the information that these fuel cells require ultra-pure hydrogen, as impurities like carbon monoxide can poison the catalyst used to crack hydrogen molecules. Purifying hydrogen can be very harmful for the environment, negating the otherwise clean nature of PEM fuel cells. Two commercially-viable alternatives to both fuel cells and gasoline are electric and hybrid vehicles. Broadly defined, electric vehicles have the capability to run entirely off of stored electrical energy, typically in batteries or capacitor banks. Tesla produces entirely electric vehicles today, while many companies like Toyota and Chevy produce commercially available hybrid vehicles. A hybrid vehicle is one that has both a gasoline engine and an electric motor, meaning that it is able to run off of both fuel sources. A wholly electric vehicle runs off of a battery pack stored within the vehicle, drawing electricity from it to power an electric motor. While nearly every vehicle has a battery in order to start a car, the batteries in electric vehicles are different due to their large capacity and higher voltages. Several different chemical compositions make up the batteries commercially available for cars today, including lithium ion and lead acid battery stacks. Compared to gasoline-powered vehicles, electric and hybrid vehicles enjoy a similar infrastructure that is widely available. Most electric vehicles have complex charging circuits that allow them to charge off of a wall plug in a household, meaning that anywhere that there is electricity, an electric car could be recharged. Due to the lack of combustion, electric vehicles produce very little in the way of byproducts. As with hydrogen, this does not take into account generation of the fuel. Coal plants supply a large portion of our electricity, so charging a vehicle is expensive and perpetuates the harmful process of burning of coal as fuel. The charging of these battery packs is often taken for granted, as the process is very complex and depends largely upon the cells used to store the charge. Batteries are subject to overheating, overvolting, and undervolting, all three of which are conditions that present safety and battery lifetime concerns. As a result, microcontrollers are generally used to measure voltages of each cell in


Alternative Energy Vehicles:

Fuel cell cars: Fuel cells are devices that extract chemical energy from a fuel source into electricity, which can then be used to power motors or other devices.

Solar powered cars: Photovoltaic cells (PVCs) made of semiconductors absorb energy from the sun and convert it to electricity.

Hybrid Cars: Engine switches between electric and gas power based on vehicle speed.

Compressed Air Cars: Compressed air tanks release air that expands in the engine--not unlike steam powered vehicles.

Bio Diesel Cars: Engine uses biofuels made from biological ingredients like soybeans and corn instead of fossil fuels.

the stack, as well as temperature. When temperatures or voltages get too high, the cell is removed from the stack. Even cells in series can have issues with overheating and overvolting. Undervolting is dangerous to battery lifetime as well. If the voltage of a lead acid battery cell drops too low, for instance, the cell will be unable to charge again. Considering all of the technology keeping battery packs safe and long-lived, the price tag for a cell similar to the one pictured in Figure 3 ranges in the tens of thousands of dollars, and such a pack generally lasts between five and ten years.
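A minimal sketch of the per-cell supervision described above is shown below. The voltage and temperature limits are invented for illustration (roughly plausible for a lithium-ion cell); a real battery management system takes its limits from the cell manufacturer's datasheet and acts through protection hardware rather than print statements.

```python
# Hypothetical per-cell limits, chosen only for illustration.
V_MIN, V_MAX = 3.0, 4.2      # volts, plausible limits for a Li-ion cell
T_MAX = 60.0                 # degrees Celsius

def check_cell(voltage, temperature):
    """Return a list of fault strings for one cell; empty means healthy."""
    faults = []
    if voltage > V_MAX:
        faults.append("overvoltage")
    if voltage < V_MIN:
        faults.append("undervoltage")
    if temperature > T_MAX:
        faults.append("overtemperature")
    return faults

pack = [(3.9, 35.0), (4.3, 36.0), (3.7, 65.0)]   # (V, degC) per cell
for i, (v, t) in enumerate(pack):
    faults = check_cell(v, t)
    if faults:
        print(f"cell {i}: isolate from stack ({', '.join(faults)})")
```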




Power and Energy Densities (courtesy of Wikimedia user Stan Zurek)

Hybrid cars are interesting from an infrastructure perspective, because they have both gasoline engines and battery packs. In comparison to electric vehicles, however, the batteries in hybrid vehicles are generally entirely charged through normal vehicle operation. The electric motor starts the vehicle, which then can run on gasoline. The combustion of the fuel turns the axle, which is connected to an alternator that converts mechanical energy into electrical energy. Generated current is then converted from AC to DC, charging the batteries back up. Additionally, every time the car brakes, some of the energy is recovered to further charge the batteries. Some research has been done on the possibility of using ultracapacitors, which have a lower energy density than batteries but a much higher power density, in order to store energy from regenerative braking. A comparison of energy and power densities is shown above. Ultracapacitors have long lifetimes compared to batteries, easily withstanding rapid charging and discharging—the same sort of stress that can kill your car battery in 5 years or less. Because of this, energy recovered from regenerative braking could charge an ultracapacitor bank, which is subsequently discharged when the vehicle needs the power—such as when it is driving up a hill, or through mud or snow. This technology is beneficial not only to electric and hybrid vehicles, but also those powered by almost any means.
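A back-of-the-envelope comparison makes the energy-density gap behind the chart above concrete. The 3000 F, 2.7 V ultracapacitor and the 3.6 V, 3 Ah lithium-ion cell below are generic, assumed figures rather than the specifications of any particular product.

```python
def ucap_energy_wh(capacitance_f, v_max, v_min=0.0):
    """Usable energy (Wh) of a capacitor swung between v_max and v_min."""
    joules = 0.5 * capacitance_f * (v_max**2 - v_min**2)
    return joules / 3600.0

# A 3000 F, 2.7 V ultracapacitor cell stores on the order of 3 Wh...
print(f"ultracap cell: {ucap_energy_wh(3000, 2.7):.1f} Wh")
# ...whereas a single 3.6 V, 3 Ah lithium-ion cell stores about 11 Wh,
# but cannot be charged or discharged anywhere near as quickly.
print(f"li-ion cell:   {3.6 * 3.0:.1f} Wh")
```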

stations, hybrids make more sense than do fuel cell vehicles. As I mentioned earlier, estimates project that oil is approaching peak production and our reserves are running low. New techniques enabling us to recover more oil require expensive technologies and research. Over time, this will drive up the price of gasoline and these other technologies become more economical for long term. To make these technologies viable, many improvements are needed to improve energy usage, energy storage, and vehicle controls.

About the Author Alex Toombs is a senior electrical engineering at the University of Notre Dame, concentrating in semiconductor devices and nanotechnology. His academic, professional and research experiences have exposed him to a wide variety of fields; from financial analysis to semiconductor device design; from quantum mechanics to Android application development; and from low-cost biology tool design to audio technology. Following his graduation in May 2013, he will be joining the cloud startup Apcera as a Software Engineer.




Âť CLICK HERE to comment on the article.



Online Circuit Simulator PartSim includes a full SPICE simulation engine, web-based schematic capture tool, and a graphical waveform viewer.

Some Features include:

• Simulate in a standard Web Browser
• AC/DC and Transient Simulations
• Schematic Editor
• WaveForm Viewer
• Easily Share Simulations

Try-it Now! www.partsim.com

BeStar®

Teamwork • Technology • Invention • Listen • Hear

ACOUSTICS & SENSORS

PRODUCTS: Speakers, Buzzers, Piezo Elements, Back-up Alarms, Horns, Sirens/Bells, Beacons, Microphones, Sensors

INDUSTRIES: Automotive, Durables, Medical, Industrial, Mobile, Fire/Safety, Security, Consumer, Leisure

Preferred acoustic component supplier to OEMs worldwide

bestartech.com | sales@bestartech.com | 520.439.9204

QS9000 • TS/ISO 16949 • ISO 14001 • ISO 13485 • ISO 9001




Design & Analysis of TDC Converter Architectures in 80nm CMOS Technology - Part 1

By: Umanath Kamath, Cypress Semiconductor; Javier Rodriguez, Strukton Rolling Stock

ABSTRACT:

A new paradigm of converters that resolve signals using time as the quantized quantity has found importance because of its varied applications, from high-energy physics to the more recent replacement of traditional RF-based phase detectors. This is also because of their suitability and superior performance in deep sub-micron processes. This article presents an overview of such an implementation and is split into two parts. The first part explains the various topologies for realizing a time-to-digital converter along with their trade-offs, while the second part will go through the implementation of a TDC in an 80nm CMOS process and conclude with results of the transistor-level implementation, giving the reader an understanding of the method and an appreciation of the various applications to which it can be applied.





1. INTRODUCTION


Nowadays, many systems use time measurements as part of their calculations to perform a certain function. Very often, however, these measurements are not required to be very accurate, and a millisecond or microsecond resolution is enough. Time-to-digital converters (TDCs) are devices which can be used when more resolution is required. They can accurately measure the time difference between two events, usually achieving a resolution on the picosecond scale. Their applications range from high-energy physics and astronomy instrumentation to RF synthesizers and medical equipment.


Although there are different ways to perform this operation, the basic principle consists of a signal propagating through several delay elements (typically CMOS inverters), which provide the time quantization of the system. Hence, much like an ADC, a digital code is generated depending on how far the signal has travelled.

1.1. AIM OF THIS ARTICLE

As can be inferred from the previous section, CMOS technology has become the most popular way to implement these high-precision time measurement devices. This document presents the analysis and design details of a TDC architecture in an 80 nm CMOS technology. It serves as an overview of the various architectures and an analysis of the chosen one; a thorough analysis of the scheme followed by transistor-level design results will be presented.

2. DESIGN SPECIFICATIONS

As mentioned in the introductory section, the proposed project targets the design of a time-to-digital converter in an 80 nm CMOS technology. The design has to meet the following minimum requirements to be considered a valid solution:

• Minimum resolution: 40 picoseconds
• Number of stages: 32 (5-bit binary readout)
• Minimum transistor size: Lmin = 80 nm, Wmin = 120 nm

The design will be carried out at the transistor level, i.e., without taking the circuit layout into account. Any parameter not listed in the requirements above may be chosen freely. The metrics used to determine the quality of the design will be the following:

• Speed (time resolution)
• Area (computed as the sum of the transistor widths)
• Power consumption
• Sampling rate (number of acquisitions per second)



3. PROPOSED TDC ARCHITECTURE

Once the project goals have been described, the next step consists of selecting one among all the available CMOS-based TDC architectures.

3.1. State of the art TDCs

3.1.1. Single delay line

In this TDC architecture [3], the target signal propagates through a delay element chain, with each delay element output connected to a latch. A reference signal triggers the latches, sampling the state of the delay chain. The value stored in the latches indicates how far the target signal has propagated through the delay chain, hence providing an accurate time measurement in a pseudo thermometer-code format.

Figure 1: Single delay line TDC architecture (the START signal propagates through the chain; the STOP signal samples the latch outputs Q0 through QN)

This is the simplest CMOS-based TDC architecture, since it uses only very simple elements such as delay cells and latches. However, its resolution is directly tied to the propagation time of the delay elements, typically making it coarser than the other techniques.
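A behavioral model helps make the thermometer-code idea concrete. The sketch below is not the article's code; the 40 ps cell delay and 32 stages simply echo the minimum specification from Section 2.

```python
# A minimal behavioral sketch of the single delay line TDC of Figure 1:
# the START edge propagates through a chain of delay elements, and the STOP
# edge samples the chain state into latches, producing a thermometer code
# whose length encodes the elapsed time.

def single_delay_line_tdc(interval_ps, cell_delay_ps=40.0, stages=32):
    """Return the latch outputs (pseudo thermometer code) and the decoded time."""
    code = [1 if (k + 1) * cell_delay_ps <= interval_ps else 0 for k in range(stages)]
    decoded_ps = sum(code) * cell_delay_ps  # quantized measurement
    return code, decoded_ps

code, t = single_delay_line_tdc(interval_ps=215.0)
print(sum(code), "stages reached ->", t, "ps (resolution limited to one cell delay)")
```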

3.1.2. Vernier delay line

The Vernier delay line TDC is a variation of the previous architecture. In this solution, additional delay elements are added to the reference signal which triggers the latches. These new delay elements have a slightly smaller propagation delay than the ones in the target signal path, causing the reference signal to propagate faster than the target signal and therefore providing a resolution equal to the difference between the two propagation delays. With respect to the previous solution, the differential time measurement means this approach needs twice the number of delay elements. Because of this, both area and power consumption increase, while the sampling rate is reduced, since the reference signal has an additional propagation time through its delay chain.

Figure 2: Vernier delay line architecture [4] (per-stage delays td1 and td2)
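The same behavioral style can illustrate the Vernier principle. The values td1 = 50 ps and td2 = 40 ps below are assumed examples, chosen only to give a 10 ps resolution.

```python
# A minimal behavioral sketch of the Vernier delay line of Figure 2: the START
# signal sees cells of delay td1 and the STOP signal sees slightly faster cells
# of delay td2, so each stage closes the gap by the resolution td1 - td2.

def vernier_tdc(interval_ps, td1_ps=50.0, td2_ps=40.0, stages=32):
    """Return the stage index at which STOP catches START, i.e. the output code."""
    resolution = td1_ps - td2_ps
    for n in range(1, stages + 1):
        if n * resolution >= interval_ps:   # STOP edge has caught the START edge
            return n
    return stages  # interval exceeds the dynamic range

print(vernier_tdc(interval_ps=85.0))  # -> 9 stages at 10 ps resolution
```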

3.1.3. Pulse shrinking

The two previous approaches were based on the delay elements' propagation time to compute the desired time measurement. However, this is not the only possible solution. The TDC proposed in [2] uses the structure shown in Figure 3, where the target signal loops through a k-stage delay line for a given time.

Figure 3: Pulse shrinking TDC architecture [2] (a k-stage delay line loop with a counter; the input pulse shrinks on each pass until the output pulse disappears)

Within this structure, the target signal width is iteratively reduced due to the different gate sizing, until it disappears. The reduction rate is given by the following expression:

Finally, a counter stores the number of iterations within the loop, which is related to the target signal width. The main advantage of this architecture is clearly the reduced area needed for a given resolution. In fact, the delay elements can be set independently of the required dynamic range. However, a high-resolution TDC based on this approach demands a slow width reduction, hence decreasing the acquisition rate.
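As a hedged behavioral illustration (the 25 ps shrink per loop is an assumed number, not a value from [2]), the loop below counts how many passes a pulse survives before vanishing.

```python
# Illustrative behavioral sketch of the pulse shrinking TDC of Figure 3: each
# pass around the loop shrinks the input pulse by a fixed amount, and a counter
# records how many passes occur before the pulse disappears, so the count is
# proportional to the input pulse width.

def pulse_shrinking_tdc(pulse_width_ps, shrink_per_loop_ps=25.0):
    count = 0
    width = pulse_width_ps
    while width > 0:
        width -= shrink_per_loop_ps  # pulse narrowed by the sized gate each loop
        count += 1
    return count  # measured width ~= count * shrink_per_loop_ps

print(pulse_shrinking_tdc(310.0))  # -> 13 loops, i.e. about 325 ps
```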

3.1.4. Local passive time interpolation

The last considered TDC architecture is presented in [4], which achieves sub-gate-delay resolution by time interpolation between two signals delayed by one inverter propagation delay (Tinv), as seen in Figure 4. Using this approach, a new set of signals (the interpolated signals) can be defined between the two signals mentioned before.

Figure 4: Time interpolation example [4] (interpolated signals between VA and VB span the relevant voltage range for the comparator)
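To make the interpolation idea of Figure 4 concrete, here is a small idealized sketch; the inverter delay Tinv and the number of divider taps are assumed example values.

```python
# A small sketch of local passive time interpolation under ideal assumptions:
# a resistive divider between two copies of an edge, separated by one inverter
# delay Tinv, yields intermediate edges whose effective timing is uniformly
# spaced between the two, giving sub-gate-delay resolution.

TINV_PS = 40.0        # inverter delay separating the two edges (example value)
TAPS = 4              # number of divider taps between the two signals

edge_a_ps = 100.0                 # arrival time of the earlier edge
edge_b_ps = edge_a_ps + TINV_PS   # arrival time of the later edge

interpolated = [edge_a_ps + (k / TAPS) * (edge_b_ps - edge_a_ps) for k in range(TAPS + 1)]
print(interpolated)   # -> [100.0, 110.0, 120.0, 130.0, 140.0], i.e. 10 ps steps
```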

Therefore, by just using a simple resistive divider between Vin and Vd, we can obtain several uniformly distributed signals between them, increasing the system resolution. The complete system is composed of several coarse-grain delay stages in a differential delay line configuration, along with resistors which provide the fine-grain time measurement by means of time interpolation.

3.2. 2-D Vernier delay line

Looking at all the different TDC architectures described above, each of them has several benefits and drawbacks, making the selection a tough task. However, we focused on the features which we considered most relevant, thus helping us to choose our final design architecture. One characteristic which seems to be common to most of the approaches is the number of delay stages needed to achieve a certain time dynamic range, which grows exponentially with the number of readout bits. In practice, this feature limits the TDC linearity due to delay element manufacturing mismatch and makes it very sensitive to phase noise, thus limiting its final resolution. Therefore, the number of delay stages became one of the main criteria for selecting a specific TDC architecture. Another issue that concerned us is the minimum delay introduced by the delay elements themselves.




Some TDC architectures rely on the fact that the resolution is determined entirely by the propagation delay of the delay elements within the chain. However, in a Vernier delay line based architecture the absolute propagation delay is no longer an issue, since the resolution is a differential measure between two different delays; the absolute delay only limits the TDC acquisition time. Taking into account these two criteria, and after a literature survey, we found one architecture which met both requirements. The solution proposed in [1] introduces a novel Vernier delay line TDC architecture which manages to significantly reduce the number of delay stages for a given number of bits. The rationale behind this solution is that a regular Vernier delay line only computes the time difference between elements located in the same position, thus achieving a linear delay function. However, the paper states that if time difference measurements between different delay elements were allowed, the number of measurements would increase dramatically, yielding a wider time dynamic range. In particular, for an N-step TDC, the number of delay stages is proportional to √N instead of N. All these time differences can be plotted on a so-called Vernier plane, as shown in Figure 5. As can be seen, only a fraction of this plane is useful, i.e., has a linear and uniform succession of time differences which can be used. Therefore, latches are placed at those plane points to detect where the two signals meet, giving a readout in thermometer-code format. Besides, there are plane points which do not contribute to the time dynamic range; these are not used and can be neglected. However, to keep the delay element design simpler and more homogeneous, these points are usually connected to the delay line, acting as dummy load capacitances. In this sense, S-R latches are preferred over D flip-flops, since their symmetric structure also helps delay homogeneity in both paths. Finally, as was done in the paper where this solution is proposed, we will limit our design space to delay lines whose elements comply with the following relationships:

Under this assumption, any point in the Vernier plane can be expressed as:

Moreover, as the useful values of y(i) are limited to the range [0,k], the above equation can be inverted, yielding the following relationships:
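As a rough illustration of the √N scaling, the sketch below enumerates a small Vernier plane and finds a latch position for each step of the linear range. The values τ1 = 4Δ and τ2 = 3Δ come from Figure 5, while the line length of six stages is an arbitrary example, not the final design.

```python
# Illustrative sketch of the 2-D Vernier plane (tau1 = 4*delta, tau2 = 3*delta
# as in Figure 5; six stages per line is an arbitrary example). Plane point
# (i, j) measures the time difference i*tau1 - j*tau2 in units of delta.

TAU1, TAU2 = 4, 3   # per-stage delays of the two lines, in units of delta
STAGES = 6          # stages per delay line (example value)

for k in range(15):  # 15 consecutive steps covered by only 6 + 6 stages
    point = next(((i, j) for i in range(STAGES) for j in range(STAGES)
                  if i * TAU1 - j * TAU2 == k), None)
    print(f"{k:2d}*delta -> latch at plane point {point}")

# A classic (1-D) Vernier line would need about 15 stages in each line to cover
# the same range, illustrating the sqrt(N) reduction in stage count.
```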

Figure 5: 2-D Vernier plane for τ1=4Δ and τ2=3Δ (Resolution = Δ = t1 - t2)

4. PERFORMANCE ESTIMATION

Before discussing our design, we would like to present some performance estimations covering the main design parameters. They are based on the prior analysis of both the simulation models used for this project and the chosen architecture features. Besides, for the following estimations we are assuming that both START and STOP signals will remain active for the whole acquisition time.

4.1. Resolution

From the previous discussion, it may seem that with this Vernier architecture we could aim for a very high resolution. However, this is not realistic, since at this level we are not considering several design parameters such as interconnection delay, manufacturing mismatch, and noise, which will degrade the circuit linearity and limit the design performance in a real implementation. Hence, our goal for this project will be to achieve a resolution of at least 10 ps.



4.2. Area

The number of delay stages (along both the x and y axes) is, without a doubt, the main issue in this design, as it will determine to a great extent the circuit area and power consumption. Indeed, increasing the number of stages generates a larger number of dummy devices within the matrix, which consume power and waste area. On the other hand, considering the capacitive load each delay stage has to drive, a square matrix structure gives the same capacitive load along both axes, making it possible to control the propagation delay difference simply by sizing the delay elements appropriately.



In addition to those considerations, we must take into account the area consumed by the 5-bit encoding logic which provides the system output; this contribution can be considered constant.


Therefore, there must be a compromise between keeping the matrix as small as possible and making the row-to-column ratio close to 1, which makes area estimates totally dependent on the final matrix structure. For a general Vernier matrix (m rows and n columns), an approximate area estimate will be:


4.3. Power

The supply voltage can be reduced to decrease the circuit power consumption without affecting its resolution. However, this will have a negative impact on the sampling rate, since the overall propagation delay will be increased. Based on the delay simulation results, a supply voltage of 0.8 V seems a reasonable choice.
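The trade-off described above can be sketched with first-order formulas. The threshold voltage, alpha exponent, and normalization constants below are assumed values for illustration, not simulation results from this design.

```python
# Rough sketch of the power/speed trade-off: dynamic power scales with VDD^2,
# while gate delay grows as VDD approaches the threshold voltage (alpha-power
# model). All constants are assumed example values in relative units.

def dynamic_power(vdd, c_eff=1.0, f=1.0):
    return c_eff * vdd ** 2 * f              # relative units

def gate_delay(vdd, vth=0.35, alpha=1.3, k=1.0):
    return k * vdd / (vdd - vth) ** alpha    # relative units

for vdd in (1.2, 1.0, 0.8):
    print(f"VDD={vdd:.1f} V  power~{dynamic_power(vdd):.2f}  delay~{gate_delay(vdd):.2f}")
```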



4.4. Sampling rate

The sampling rate will depend mainly on both the number of delay stages and the total delay introduced by these stages in the worst-case scenario. Hence, even if the absolute gate delay is not an important issue for the TDC resolution, it must be bounded by the accumulated delay of the longest delay line. Besides, for now we are neglecting the propagation delay introduced by the matrix latches and the 5-bit encoding stage. Again, this figure is mainly dependent on the matrix structure, and an approximate figure for the acquisition time can be computed as:

Concluded in Part 2

REFERENCES

[1] Vercesi, L., Liscidini, A., Castello, R., "Two-Dimensions Vernier Time-to-Digital Converter," IEEE Journal of Solid-State Circuits, August 2010, pp. 1504-1512.

[2] Chen, P., Liu, S. I., Wu, J., "A CMOS pulse shrinking delay element for time interval measurement," IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, September 2000.

[3] Staszewski, R. B., Vemulapalli, S., Vallur, P., Wallberg, J., Balsara, P. T., "1.3 V 20 ps time-to-digital converter for frequency synthesis in 90-nm CMOS," IEEE Transactions on Circuits and Systems II: Express Briefs, March 2006.

[4] Henzler, S., Koeppe, S., Lorenz, D., Kamp, W., Kuenemund, R., Schmitt-Landsiedel, D., "A Local Passive Time Interpolation Concept for Variation-Tolerant High-Resolution Time-to-Digital Conversion," IEEE Journal of Solid-State Circuits, July 2008.

[5] Rabaey, J. M., Chandrakasan, A., Nikolic, B., "Digital Integrated Circuits: A Design Perspective," 2nd edition, Prentice Hall, 2006.


Comic: "Dreamin' of Overtime"
