How Much Data in that Fiber?

In the “Age of Accelerating Returns” we are inundated with mega-, giga-, and tera-figures marking technological– and supposedly human– progress.  These numbers are now well beyond our capacity to comprehend.  It’s like the new finding that there are about 3E23 stars in the universe– I have absolutely no idea what that means.

One example is communications capacity.  We are often told “this link could transmit the equivalent of the Library of Congress in __ seconds.”  Have you been to the LoC?  Do you have any idea how big it is?  Me neither.  Even if you knew there were 32 million books in the library, it wouldn’t get you any closer.

So I was thinking about a different way to explain the capacity of the latest fiber-optic transmission systems.  Long-haul systems (with reach of thousands of km) are getting to the point where they can shove 10 Terabits/second down the core of an optical fiber:

Light-carrying core of a long-haul optical fiber.

A side remark on optical fiber: if you shone a flashlight through a 10km-thick slab of glass, how much light do you think would make it out the other side?  Modern optical fiber transmits 67% of (infrared) light over that distance!
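If you want to see where that figure comes from, here is a minimal sketch in Python, assuming a loss of roughly 0.17 dB/km at 1550 nm– a typical spec for modern single-mode fiber, not a number from any particular datasheet:

```python
# Fiber loss in dB scales linearly with distance; power scales as 10^(-dB/10).
attenuation_db_per_km = 0.17   # assumed typical loss for modern SMF at 1550 nm
length_km = 10

loss_db = attenuation_db_per_km * length_km
transmitted_fraction = 10 ** (-loss_db / 10)

print(f"{transmitted_fraction:.0%} of the light survives {length_km} km")
# -> 68%, consistent with the ~67% quoted above
```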

A lot of very cool engineering goes into making this work.  The standard is DWDM DP-QPSK = dense wavelength-division multiplexing, (coherent) dual-polarization quadrature phase-shift keying.  You can get your tech jollies reading a pretty good overview written by the Optical Internetworking Forum.  The result– if you factor in all-optical amplification– is that you can transmit data for thousands of kilometers entirely optically.

What is 10 Terabits per Second?
I was thinking about ways to visualize this without resorting to the Library of Congress.  One thing we understand pretty well is video.  And in fact, much of the need to light up fibers with ever more capacity is driven by video.  You can transmit a 1080p HD video stream with about 5 Mbit/s of capacity.  That means a single fiber can push 2,000,000 HD video streams.
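The arithmetic is a one-liner; a quick sketch using the figures above (10 Tbit/s link, ~5 Mbit/s per stream– both round numbers, not vendor specs):

```python
fiber_capacity_bps = 10e12   # 10 Tbit/s down one fiber core
stream_bps = 5e6             # ~5 Mbit/s per 1080p H.264 stream

streams = fiber_capacity_bps / stream_bps
print(f"{streams:,.0f} simultaneous HD streams")   # -> 2,000,000
```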

What do 2 Million Video Streams Look Like?
Again, 2 million is a number that is too high for humans to visualize.  I thought about what it would look like if you stacked up that many streams.  For that exercise, let’s use 42″ flat panel TVs… sort of the “standard” flat TV size these days.  Each (generally LCD) panel measures 0.93m x 0.5m, with 52 dpi pixel resolution.

I wondered what it would look like if you stacked (without bezel) that much display capacity– how much live, HD video a single fiber could carry.  For comparison with the fiber, let’s keep it in a circular format.  Here is what I get:

HD teradisplay driven by single optical fiber.

So a fiber core 9 microns in diameter could feed a very high-resolution 1110m-diameter display… a factor of 1.5E16 larger in area (OK, not possible to comprehend… just picture a piece of glass the diameter of a red blood cell perched on top of the Burj Khalifa).  And if you need help understanding how tall that building is, here’s a video.
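For the curious, here is the back-of-envelope calculation behind that picture, in Python.  The panel dimensions and stream count are the round numbers quoted above; small changes in the assumed panel size move the diameter between roughly 1090 and 1110 m:

```python
import math

panels = 2_000_000                  # one 1080p panel per video stream
panel_area_m2 = 0.93 * 0.5          # assumed 42-inch panel footprint

display_area_m2 = panels * panel_area_m2
print(f"display area: {display_area_m2:,.0f} m^2")        # ~930,000 m^2

# Pack that area into a circle, for comparison with the fiber core
display_diameter_m = math.sqrt(4 * display_area_m2 / math.pi)
print(f"display diameter: ~{display_diameter_m:,.0f} m")  # ~1,090 m

core_area_m2 = math.pi * (9e-6 / 2) ** 2                  # 9-micron core
print(f"area ratio: ~{display_area_m2 / core_area_m2:.1e}")   # ~1.5e16

pixels = panels * 1920 * 1080
print(f"pixels: ~{pixels/1e12:.1f} Terapixels")           # ~4.1 Terapixels
```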

4 Terapixels— now that has some serious potential!  My favorite use would be to make large sections of park, city, university into telepresence walls (video, sound, maybe 3D) to a sister city on the other side of the world.

By the way, the fact that you can feed 4 Terapixels with 10 Terabits per second (2.5 bits per pixel per second) of high-quality video is a testament to the efficiency of H.264 compression.  The ability to run H.264 on even handheld devices is a direct result of Moore’s Law.  Without such good compression, the cost of transporting video would be unsustainable (though I’m sure optical component suppliers wish it were a little less efficient).
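To put numbers on that claim, a short sketch comparing raw 1080p video to the ~5 Mbit/s stream rate used above (24 bits/pixel and 30 frames/s are my assumptions– actual frame rates and chroma subsampling vary):

```python
width, height, fps, bits_per_pixel = 1920, 1080, 30, 24

raw_bps = width * height * fps * bits_per_pixel
print(f"raw 1080p30: {raw_bps/1e9:.2f} Gbit/s")                # ~1.49 Gbit/s

compressed_bps = 5e6
print(f"compression ratio: ~{raw_bps/compressed_bps:.0f}:1")   # ~300:1

# Per-pixel budget on the link: 10 Tbit/s spread over ~4 Terapixels
print(f"{compressed_bps/(width*height):.1f} bits/pixel/s")     # ~2.4
```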

Are There Enough TVs for That?
Easily.  The world already buys 180 million LCD TVs per year.  A single plant (Sharp’s Sakai City/Osaka site) has a capacity of 7.8 million square meters of LCD per year– enough to panel the teradisplay above in about six weeks– and it is being eclipsed by new plants in China.  The visual comparison of that display area I’ll leave for another day…

Advantage: Cost?

(Could be subtitled: Learning from Some Past Mistakes!)

Have a Series A startup that makes a new kind of device for a hot market?  Photovoltaics, displays, 2D or 3D image sensors, inertial sensors, batteries, solid-state lighting, FLASH memory replacement?  Is “• lower cost” a bullet point on Slide 1 of your pitch?  That’s probably a mistake– unless you have a comfortable 10x cost advantage on the competition.

As I was reviewing my venture experiences & observations, it dawned on me that this low-cost promise to customers and investors is at the root of many, if not most, problems with venture-backed device startups.  It is such a tempting promise to make, however.  Who doesn’t like the concept of “plug-compatible, half the price”?  You’ll have some enthusiastic customers.  Hiring a sales team won’t be much of an effort.  So VCs love it too– and let’s just admit it: they are the primary customer of an early-stage device venture (they buy a lot of stock!).

The problem with the promise is that it sets you up for a race you almost certainly cannot win.  The rare case that “wins” is where a big sucker buys the company in a competitive frenzy.  The more likely case is that you run out of money somewhere on this trajectory:

The Timeline

  • Discovery – “it works!” in the lab and it looks simple to manufacture.  Claim big cost/price advantage over last-generation incumbent.
  • First hiccup – choose from hiccup sources below.  Adds 25% to cost to fix it, and 1 year to development.
  • Second hiccup – same deal, over again.
  • Finally ready to go!
  • Low-volume production starts at high cost… catch up with the incumbent through volume and experience.

Each time a hiccup hits, you’ll observe with increasing dread the falling prices of the incumbent technology.  In a hot market, you’d better reckon with 20%/year price reductions.  So if your original 3-year commercialization plan turns into 5 (optimistic), the incumbent has dropped prices by 67%.  You’ve added 56% to your anticipated costs.  That has eaten up a 5x initial cost advantage.  Add a 2x margin requirement, and you see where my 10x rule of thumb comes from!
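Here is that arithmetic spelled out, under the assumptions stated above (20%/year incumbent price declines, two hiccups at +25% cost each, and a 3-year plan slipping to 5):

```python
your_cost = 1.0 / 5                    # start with a 5x cost advantage
incumbent_price = 1.0

# Two hiccups, the plan runs 5 years instead of 3
incumbent_price_after = incumbent_price * 0.8 ** 5
print(f"incumbent price: {incumbent_price_after:.2f} "
      f"(a {1 - incumbent_price_after:.0%} drop)")          # 0.33, -67%

your_cost_after = your_cost * 1.25 ** 2                     # two 25% hiccups
print(f"your cost: {your_cost_after:.3f} "
      f"(+{your_cost_after / your_cost - 1:.0%})")          # +56%

# The 5x advantage has evaporated to roughly parity...
print(f"remaining advantage: {incumbent_price_after / your_cost_after:.2f}x")
# ...and with ~2x still needed for margin, you arrive at the 10x rule of thumb.
```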

Hiccups
For device startups, there are three typical hiccups (besides the technology turning out not to work).  If you’re lucky, you only experience one.  More likely, you’ll deal with two or three.  My entirely redundant illustration:

  • Operating condition failure.  Doesn’t work properly at temperature, voltage, vibration, etc.
  • Reliability failure.  Doesn’t survive accelerated testing.
  • Yield failure.  Doesn’t yield well off the production line.

Manufacturing Scale
By the time you are ready, the minimum manufacturing scale has grown significantly.  This means a bigger manufacturing investment, and even higher early costs.  The resulting cumulative losses in the early production years are often unsustainable.  Any reaction by incumbents can make this gap significantly more painful.

In the Obama era the strategy to fill this gap (for solar or batteries, at least) seems to be to insist the taxpayer should fund it.  That’s not something a new venture can count on.

Related/Resulting Venture Problems
There are a couple of potentially fatal issues that result from the “race for cost advantage.”  They can be the basis for future posts here.  Briefly, they are:

  • Premature bulk-up / spending.  Have you seen the ventures with only 5 possible industrial customers for their device, no beta units to ship, but featured in the New York Times and other “hot tech” outlets?  It’s a symptom of a team that has grown too quickly and in the wrong dimensions.
  • Wobbly technology tower.  Running too fast causes you to assume too many things, and to work on conjecture.  You fail to do properly designed experiments.  You fail to measure capabilities.  You develop in parallel, counting on high-risk paths to mate up at fantasy dates.  One result is that when a hiccup hits, you often end up redoing enormous amounts of work instead of having a good foundation to build on.

So – What?
My advice is to find an axis other than cost on which you can compete– even if in only a niche of the overall market you are targeting.  If your technology’s only advantage is lower cost, think twice about starting in a hot market (if it’s a stagnant market, and you can create a new segment, it’s a different story).  Ideally, you have a near-term “performance” story, and a long-term cost advantage story that can be realized as volume grows.

Future of Equipment Maintenance?

Rather than a dry commentary on hardware start-ups, I thought I would record a couple of musings on the trend in surgery– and how it might be extended to machine repair.

I suspect the first “machines” ever built were, by version 2, designed to be maintained.  Breakage of critical components was a fact of life.  So you designed the machine to be easily disassembled and reassembled.

Leonardo da Vinci: gears

Machine Built for Reassembly

Let’s look at another, even more successful machine: the human body.  Each piece (except the very tough layer of keratin we call skin and hair) is extremely fragile if it is exposed to the environment, or if it is separated from its support systems.  Why does it work so well?  (1) it is a sealed system, with very good filters; (2) it defends itself vigorously against intrusions; and (3) it regenerates– we are learning more about stem cells every day.

Another Even More Successful Machine

When we decided to start repairing the human body, we approached it like the machines we knew: open the lid, disassemble, fix, reassemble, close the lid.  Use tools that looked very much like everyday implements: knives, scissors, pliers.  But the human body really wasn’t made like a machine.

Fixing It Like a Machine

Anyone following medicine knows we have rapidly moved to a different model for fixing the human body: leave the system sealed to the maximum extent, and target only the problem itself.  Often fix the problem in place with stents, lasers, ultrasound, RF energy, “glue,” or even new tissue.  Leaving the system sealed has tremendous benefits.  You reduce the potential for collateral damage, and minimize the introduction of contaminants.  You don’t forget a pair of scissors inside.  The first generation used some pretty simple tools:

A Common Endoscopic Procedure

The second generation has harnessed advances in robotics to produce something far more sophisticated and agile:

Next Generation: da Vinci Telerobotic Surgical System

So– any implications for machines?

I expect the design of machines (from servers to jet turbines, and everything in between) will shift over the coming years, from a model where you can disassemble/reassemble easily, to one where the system is built sealed, and diagnoses/repairs/upgrades are done using minimally-invasive tools.  The potential advantages include:

  • Design systems for efficiency, cost, performance and better aesthetic value, not for traditional repair.  As an example, if you were to design a server rack for efficiency and performance, it’s unlikely you would base it on plug-in cards/units.  In the extreme case you might want to have the whole thing immersed in cooling liquid.  An existing example is the MacBook Air– hardly built for modular upgrades and expansion, but rather for size, battery life, and thermal performance.
  • Reduce maintenance errors.  Just as with the human machine, equipment repairs and upgrades often result in collateral damage.  How many times have you taken something apart, put it together, and had leftover screws?  Or heard something rattling inside?  Or had to force the cover closed for a reason you didn’t want to investigate?  Minimally-invasive maintenance on a sealed system would target the source of the problem, and leave everything else alone.

The tools for minimally-invasive manual repair already exist.  The first generation is pretty simple and has been in use for complex machinery for decades.  It’s no coincidence that they resemble first-generation minimally-invasive surgery equipment (and they are in some cases produced by the same companies, like Olympus):

Optical Fiber Borescopes

What I am interested in is what the next generation looks like– where robotic and imaging technology are brought to bear on this field.  Already there are some really interesting technologies being demonstrated.  For example, companies like OC Robotics are taking “snake robot” technology and applying it to inspection:

Locally, Energid has developed a platform enabling real-time control of complex, “kinematically redundant” robotic limbs that can avoid even moving obstructions while they inspect and repair.  They have also integrated vision-based guidance, so a robot limb could in theory navigate through a machine based on a CAD file.

Much of this work is starting to be applied in environments where human hands are not an option: nuclear power reactors, outer space, deep sea.

Now comes the exciting part: taking next-gen minimally invasive machine inspection and surgery (or even assembly) to commercial applications.  And changing the way equipment is designed to take advantage of it.  This will require new extensions to CAD packages as well, to optimize design for repair, and potentially to co-design the “surgical tools” to do it.  That’s an opportunity we never had for the human body!

Consumer Picoprojectors: Not Happening

After debating this market a number of times with fellow entrepreneurs and investors I thought I would write down why I’m a skeptic.

Take a look at the following marketing images from some of the well-known picoprojector players.  What is wrong with them?  They are all Photoshop creations.  The answer below– and why mass-market picoprojectors are not going to happen…

Bullshit Picoprojector Images

The problem?  While it’s possible to project light, it is pretty tough to project darkness! Apparently, though, that’s happening in each of these “application examples.”  Sections of the projected areas are darker than the surface they are being projected on.

For the same reason physically impossible images were used, I believe the general-purpose picoprojector market is a hopeless cause.  We love high contrast ratio in displays.  LCD manufacturers are using it as a key measure of competition (1000:1 is an absolute minimum these days).  Very simply put, contrast ratio is the ratio of the brightest bright to the darkest dark in the image.  Unfortunately, when using a projector, the darkest dark is set by ambient lighting.

Let’s stay away from outdoors and pretend we’re using a picoprojector in an office (for those impromptu PPT reviews at the water cooler).  The recommended illuminance level in an office is 500 lux, or 500 lumens/m^2.  That sets the “dark” level for the contrast ratio.  We’ll compare to the iPhone 3GS, for which I have decent LCD stats.  It has a contrast ratio of 47:1 in high ambient lighting (you’ll agree that’s not a fantastic viewing experience).  To achieve the equivalent, we need the picoprojector to add an illuminance in bright areas of 46*500 = 23,000 lux at the viewing surface.

Now take a look at the picoprojector specifications.  3M MPro150 ($395) = 15 lumens (among the highest).  If you project an image the size of an iPhone screen, you get an 8.5:1 contrast ratio.  At iPad size, it’s down to a 1.6:1 CR (that means the brightest areas are only 60% brighter than the darkest).
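A sketch of the math, assuming perceived contrast = (projected + ambient) / ambient and approximate screen areas for the two devices (my estimates, not measured values):

```python
ambient_lux = 500.0          # recommended office illuminance
projector_lumens = 15.0      # 3M MPro150 class

def contrast_ratio(screen_area_m2: float) -> float:
    """Bright areas get projector light plus ambient; dark areas get ambient only."""
    projected_lux = projector_lumens / screen_area_m2   # lumens/m^2 = lux
    return (projected_lux + ambient_lux) / ambient_lux

iphone_area_m2 = 0.050 * 0.075    # ~3.5-inch screen (rough estimate)
ipad_area_m2 = 0.148 * 0.197      # ~9.7-inch screen (rough estimate)

print(f"iPhone-size image: {contrast_ratio(iphone_area_m2):.1f}:1")  # ~9:1
print(f"iPad-size image: {contrast_ratio(ipad_area_m2):.1f}:1")      # ~2:1
# Same ballpark as the 8.5:1 and 1.6:1 figures above; exact numbers
# depend on the screen areas assumed.
```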

Besides finding a way to project darkness, the only other approach is to project more light.  Unfortunately, one very quickly runs into battery and thermal issues.  It’s just easier to present/watch a movie off a tablet.

Does that mean curtains for picoprojectors?  No.  Most of the suppliers will go away.  However, niche applications will emerge for the current architectures.  I recently had a funny conversation with someone involved in this industry, who said they were popular among workers living in factory dorms, after lights-out time!

Instead, I have been looking at a picoprojector architecture that functions in high-ambient-light environments– but one focused on commercial/industrial applications, not PPT reviews or vacation slide shows next to the water cooler.

Venture Pay Reset?

Looking at a piece in the Wall Street Journal covering one of the annual venture compensation studies, I was struck by how out of touch venture CEO pay has become with results.  In some ways, it’s simply a scaled-down model of what has occurred in public companies.  But there is more to it.

Something happened to the venture industry after 2000.  And it didn’t just happen to the greedy VCs who are often criticized for oversized funds with 2% “I buy a new plane whether you get a return or not” management fees.

The same mentality set in with CEOs and management teams.  Instead of a big exit being everyone’s singular focus, it became a “nice to have.”  Part of this was just facing up to reality, of course.  But part of it grew from a circuit of recruiters and roving “professional venture CEOs”– and of course the compensation surveys themselves.  To hire a top-quality CEO, the story goes, you need to pay above-average salaries (especially if you are in an expensive area like CA, MA, NY!).

I have seen it first hand, and yes, participated in it.  It’s infectious.  One popular rationale is “it’s the VCs’ money, and they make $1M+ per year regardless– why should it be different for us?!”

That is a far cry from when we were starting Aegis Lightwave in 1999, paying ourselves living expenses, and going down to zero for months on end when cash got short.  Everyone on the team got the idea, and counted pennies.  That frugal culture has persisted at Aegis, and helped make it consistently profitable.

The Web 2.0 generation of startups is resetting expectations— both for VCs and for entrepreneurs.  Let’s hope it translates to other traditional VC-backed industries.

Device Death Blossom

It looks like a beautiful flower.  Sniff it too long and you’ll have plenty of time to dream about it.

I have seen probably a dozen component venture presentations (I just found one of my own!) that have what I call the “Death Blossom” slide.  Apologies to those who aren’t B-grade sci-fi fans and haven’t seen The Last Starfighter.  The GunStar (wow, I remember being blown away by those graphics!) is equipped with the “Death Blossom” feature, which spins the ship like mad and fires its missiles to vaporize every bad guy in sight.

Ventures formed around a materials science or novel device breakthrough often imply they can do the same.  More likely, they will spin out of control and launch a battery of very expensive missiles into deep space.

The Standard Optoelectronic Death Blossom

Yes, I’m picking on optoelectronics for this one– since I built this very diagram myself 12 years ago.  The one where you have a device that allows some exchange between electrons and photons… and it can do everything better than what’s in the market today!

The temptation, particularly among first-time entrepreneurs, is to develop the “platform” and license applications to companies that actually build things.  This may work in pharmaceuticals– I am no expert on that market– but it certainly does not work in optoelectronic devices.

The simple truth is that to commercialize a single device, with a single advantage over the incumbent technology, typically takes 5-10 years.  It requires an incredibly focused, systematic effort around a specific set of requirements.  Usually you need to make a lot of system trade-offs around the peculiarities of the new device– and hopefully to bring its advantage to the fore.

So pick a market for your device that is big enough at the system level, and can grow quickly. Pick an axis along which to compete— and unless you are beating the competition by 5-10x on cost at the device level, don’t pick cost (subject of future post).  Put the blueprints for world domination of every other sector into your “future ventures” file.  And start working!

“Lean” Hardware Startups

Jeff Bussgang of FlyBridge Ventures wrote a good post today about application of “lean startup” concepts to capital-intensive ventures.  It’s something I have been contemplating quite a bit after having started and run two optoelectronic system ventures, and then dipped my toes in the software world.

Obviously there are some significant differences in what gates progress in web/SaaS/software versus component/system ventures.  Whereas risk in web start-ups resides mainly in the market (or “product-market fit”), most component ventures are gated by technical risk.  Market risk can often be reduced very quickly (A/B tests, test campaigns, etc.), and progress measured on a daily basis (e.g. conversion rates).  In components, risk-reduction cycles (often run using the ur-A/B test, “designed experiments”) may take weeks or months.

As CEO, it’s tough to go to a board meeting and say “the wafers are still in the fab, just like last time,” so you are inclined to make other forms of progress.

The most insidious form of artificial progress is hiring.  At the early stages, it’s usually a small, tight team that is doing all the core risk reduction.  Hiring additional layers before you reach well-defined technology or product gates is a recipe for painful down rounds.  It takes time, it integrates cash burn, and takes an emotional toll when you have high-paid, high-powered applications, systems, and sales people sitting on their hands.  Then you have more cash burn and endure more pain when you pivot the company and have to replace many of those layers.

I like Jeff’s (and FlyBridge’s) model of $500,000 seed money to build a prototype.  But even a Series A in these ventures typically has lots of risk left in it — and spending must be gated according to core development milestones.