Are Transistors Alien Technology?
Did we inherit the transistor from a crashed UFO in 1947, or was it
developed by our very own people? This question has sparked many long
discussions, and it still lingers as part of the UFO lore.
We are grateful to our loyal
poster Lorien, who shares some of his thoughts on this subject. He brings
us a detailed timeline of the development of the transistor. There may be
technologies out there that were taken from aliens, but when it comes down
to transistors, it is safe to say we deserve the credit. Not those skanky aliens.
The timeline is simple. It starts from quantum mechanics basics and runs right up to the modern-day transistor. All dates and statements show a steady
progression toward the final development of the transistor. I cannot
tell you if this information is 100% authentic, and I encourage you to do
your own research into the matter.
Quantum Mechanics Basics
In day to day life, we intuitively
understand how the world works. Drop a glass and it will smash to the floor.
Push a wagon and it will roll along. Walk to a wall and you can’t walk through
it. There are very basic laws of physics going on all around us that we
instinctively grasp: gravity makes things fall to the ground, pushing something
makes it move, two things can’t occupy the same place at the same time.
At the turn of the century, scientists thought that all the basic rules like
this should apply to everything in nature — but then they began to study the
world of the ultra-small. Atoms, electrons, light waves, none of these things
followed the normal rules. As physicists like Niels Bohr and Albert Einstein
began to study particles, they discovered new physics laws that were downright
quirky. These were the laws of quantum mechanics, and they got their name from
the work of Max Planck.
In 1900, Max Planck was a physicist in Berlin studying something called the
“ultraviolet catastrophe.” The problem was that the laws of physics
predicted that if you heat up a box in such a way that no light can get out
(known as a “black body”), it should produce an infinite amount of
ultraviolet radiation. In real life no such thing happened: the box radiated
different colors, red, blue, white, just as heated metal does, but there was no
infinite amount of anything. It didn’t make sense. These were laws of physics
that perfectly described how light behaved outside of the box — why didn’t they
accurately describe this black-body scenario?
Planck tried a mathematical trick. He presumed that the light wasn’t really a
continuous wave as everyone assumed, but perhaps could exist with only specific
amounts, or “quanta,” of energy. Planck didn’t really believe this was
true about light, in fact he later referred to this math gimmick as “an act
of desperation.” But with this adjustment, the equations worked, accurately
describing the box’s radiation.
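Planck's "quanta" amount to the relation that light of frequency ν carries energy only in whole-number multiples of a smallest unit, where h is Planck's constant (about 6.626 × 10⁻³⁴ joule-seconds):

```latex
E = n h \nu, \qquad n = 1, 2, 3, \ldots
```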
It took a while for everyone to agree on what this meant, but eventually Albert
Einstein interpreted Planck’s equations to mean that light can be thought of as
discrete particles, just like electrons or protons. In 1926, Berkeley physicist
Gilbert Lewis named them photons.
This idea that particles could only contain lumps of energy in certain sizes
moved into other areas of physics as well. Over the next decade, Niels Bohr
pulled it into his description of how an atom worked. He said that electrons
traveling around a nucleus couldn’t have arbitrarily small or arbitrarily large
amounts of energy, they could only have multiples of a standard
“quantum” of energy.
Eventually scientists realized this explained why some materials are conductors
of electricity and some aren’t — since atoms with differing energy electron
orbits conduct electricity differently. This understanding was crucial to
building a transistor, since the crystal at its core is made by mixing materials
with varying amounts of conductivity.
Here’s one of the quirky things about quantum mechanics: just because an
electron or a photon can be thought of as a particle doesn’t mean it can’t
still be thought of as a wave as well. In fact, in a lot of experiments light
acts much more like a wave than like a particle.
This wave nature produces some interesting effects. For example, if an electron
traveling around a nucleus behaves like a wave, then its position at any one
time becomes fuzzy. Instead of being in a concrete point, the electron is
smeared out in space. This smearing means that electrons don’t always travel
quite the way one would expect. Unlike water flowing along in one direction
through a hose, electrons traveling along as electrical current can sometimes
follow weird paths, especially if they’re moving near the surface of a material.
Moreover, electrons acting like a wave can sometimes burrow right through a
barrier. Understanding this odd behavior of electrons was necessary as
scientists tried to control how current flowed through the first transistors.
Scientists interpret quantum mechanics to mean that a tiny piece of material
like a photon or electron is both a particle and a wave. It can be either,
depending on how one looks at it or what kind of an experiment one is doing. In
fact, it might be more accurate to say that photons and electrons are neither a
particle nor a wave — they’re undefined up until the very moment someone looks
at them or performs an experiment, thus forcing them to be either a particle or a wave.
This comes with other side effects: namely that a number of qualities for
particles aren’t well-defined. For example, there is a theory by Werner
Heisenberg called the Uncertainty Principle. It states that if a researcher
wants to measure the speed and position of a particle, he can’t do both very
accurately. If he measures the speed carefully, then he can’t measure the
position nearly as well. This doesn’t just mean he doesn’t have good enough
measurement tools — it’s more fundamental than that. If the speed is
well-established then there simply does not exist a well-established position
(the electron is smeared out like a wave) and vice versa.
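The Uncertainty Principle can be written compactly, where Δx and Δp are the uncertainties in a particle's position and momentum and ħ is the reduced Planck constant:

```latex
\Delta x \, \Delta p \ge \frac{\hbar}{2}
```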
Albert Einstein disliked this idea. When confronted with the notion that the
laws of physics left room for such vagueness he announced: “God does not
play dice with the universe.” Nevertheless, most physicists today accept
the laws of quantum mechanics as an accurate description of the subatomic world.
And certainly it was a thorough understanding of these new laws which helped
Bardeen, Brattain, and Shockley invent the transistor.
Semiconductors had been discovered in the
early 1930s, but not much was known about them. While scientists weren’t sure
how they worked, they were certain that semiconductors were useful.
Semiconductor crystals were used in radio and radar receivers because they could
take in the high-frequency alternating signal of the radio wave and extract the
low frequencies necessary for the headphones. Crystals which did this were known
as rectifiers.
During World War II, radio and radar were extremely important — and therefore
so were rectifiers. But rectifiers had a problem known as “burn out.”
Sudden bursts of electricity in the wrong direction could destroy them. So one
of the tasks the US government gave scientists during the war was to produce
sturdier rectifiers.
It was the Purdue University Physics lab, led by Karl Lark-Horovitz, that
managed to make them. One of the graduate students, Seymour Benzer, accidentally
discovered that a crystal of germanium — a semiconductor which was not well
understood at the time — could withstand higher voltage than any current
rectifier. Benzer spent over a year tinkering with germanium until he discovered
that mixing in trace elements of tin could produce rectifiers that were ten
times more resistant than was standard.
Most people who heard about the results wouldn’t believe it until they saw it
with their own eyes. But soon germanium was established as a crucial part of
wartime radar technology.
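The one-way behavior of a rectifier can be sketched with the ideal diode (Shockley) equation. This is a minimal illustration of rectification in general, not a model of the wartime germanium crystals; the saturation current and thermal voltage below are illustrative assumptions.

```python
import math

def diode_current(v, i_s=1e-12, v_t=0.02585):
    """Ideal diode (Shockley) equation: I = I_s * (exp(V / V_T) - 1).

    i_s: saturation current in amperes (illustrative value)
    v_t: thermal voltage at room temperature, ~25.85 mV
    """
    return i_s * (math.exp(v / v_t) - 1.0)

# Forward bias lets current through; reverse bias blocks almost all of it.
forward = diode_current(0.3)     # appreciable current flows
reverse = diode_current(-0.3)    # essentially none (about -i_s)

print(f"forward: {forward:.3e} A")
print(f"reverse: {reverse:.3e} A")
```

Feeding an alternating voltage through such a device strips away one direction of the current, which is exactly the job crystal rectifiers did in radio receivers.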
Point Contact Transistor
The first transistor was about half an
inch high. That’s mammoth by today’s standards, when 7 million transistors can
fit on a single computer chip. It was nevertheless an amazing piece of
technology. It was built by Walter Brattain. Before Brattain started, John
Bardeen told him that they would need two metal contacts within .002 inches of
each other — about the thickness of a sheet of paper. But the finest wires then
were almost three times that width and couldn’t provide the kind of precision
they needed.
Instead of bothering with tiny wires, Brattain attached a single strip of gold
foil over the point of a plastic triangle. With a razor blade, he sliced through
the gold right at the tip of the triangle. Voila: two gold contacts just a
hair’s width apart.
The whole triangle was then held over a crystal of germanium on a spring, so
that the contacts lightly touched the surface. The germanium itself sat on a
metal plate attached to a voltage source. This contraption was the very first
semiconductor amplifier, because when a bit of current came through one of the
gold contacts, another even stronger current came out the other contact.
Here’s why it worked: Germanium is a semiconductor and, if properly treated, can
either let lots of current through or let none through. This germanium had an
excess of electrons, but when an electric signal traveled in through the gold
foil, it injected holes (the opposite of electrons) into the surface. This
created a thin layer along the top of the germanium with too few electrons.
Semiconductors with too many electrons are known as N-type and semiconductors
with too few electrons are known as P-type. The boundary between these two kinds
of semiconductors is known as a P-N junction, and it’s a crucial part of a
transistor. In the presence of this junction, current can start to flow from one
side to the other. In the case of Brattain’s transistor, current flowed towards
the second gold contact.
Think about what that means. A small current in through one contact changes the
nature of the semiconductor so that a larger, separate current starts flowing
across the germanium and out the second contact. A little current can alter the
flow of a much bigger one, effectively amplifying it.
Of course, a transistor in a telephone or in a radio has to handle complex
signals. The output contact can’t just amplify a steady hum of current, it has
to dutifully replicate a person’s voice, or an entire symphony. Luckily, a
semiconductor is perfectly suited to this job. It is exquisitely sensitive to
how many extra or missing electrons are inside. Each time the input signal
shoves more holes into the germanium, it changes the way current flows across
the crystal — the output current instantly gets larger and smaller, perfectly
mimicking the input.
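The amplifying action described above can be sketched as a toy linear model: a small input signal shapes a much larger output current with the same pattern. The gain value here is an arbitrary illustration, not a measured property of the point-contact transistor.

```python
def amplify(input_signal, gain=50.0):
    """Idealized amplifier: the output mimics the input's shape,
    scaled up by a constant gain (an illustrative value)."""
    return [gain * sample for sample in input_signal]

# A "voice" signal: a small, rapidly varying input current.
voice = [0.1, 0.4, -0.2, 0.3, -0.1]
output = amplify(voice)

print(output)  # same pattern as the input, 50x larger
```

A real transistor is not perfectly linear like this, but the essential idea holds: the output faithfully tracks the shape of the input while carrying far more power.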
The Silicon P-N Junction
In 1939, vacuum tubes were state of the
art in radio equipment. People had previously used crystals for radios, but the
crystals were so maddeningly inconsistent and mysterious it was a wonder they
worked at all. Vacuum tubes were simple, and they worked. Most scientists agreed
tubes were the future for radio and telephones everywhere.
Russell Ohl didn’t agree. He kept right on studying crystals, occasionally
having to fight Bell Labs administration to let him do it. Ohl thought silicon
crystals’ erratic behavior was due to impurities in the crystal, not any problem
in the silicon itself. He thought that if he could purify silicon enough, the
crystals just might provide the improved radio broadcasting capabilities for
which everyone was looking.
Much of his research in 1939 was devoted to producing ultra-pure crystals. As he
expected, his purified silicon crystals — now 99.8 percent pure — were much
more consistent. They worked the way a rectifier should, allowing current to
flow in one direction and not the other. At least, most of them worked. On
February 23, Ohl sat down to examine a particularly curious crystal that was as
quirky as the cat’s whisker crystals of old.
The crystal had a crack down the middle. Ohl was examining how much current
flowed through one side of the crack versus the other, when he noticed something
peculiar. The amount of current changed when the crystal was held over a bowl of
water. And near a hot soldering iron. And near an incandescent lamp on the desk.
By early afternoon, Ohl realized that it was in fact light shining on the
crystal that caused this small current to begin trickling through it. On March
6, he showed his prize silicon rod to Mervin Kelly. Kelly quickly called Walter
Brattain and Joseph Becker to the scene.
Ohl had his coal-black crystal attached to a voltmeter in front of him. He
turned on a flashlight, aimed it at the silicon, and the voltage instantly
jumped up to half a volt. This was ten times anything Brattain had ever seen
before. He was stunned, but not too stunned to produce an off-the-cuff
explanation. The electrical current must be due to some barrier being formed
right at the crack in the crystal.
With more research, what was going on became clear: the crystal had different
levels of purity on either side of the crack. Due to the subtle traces of extra
elements, one side had an excess of electrons, and the other side a deficit.
Since opposites attract, the electrons from one side had rushed over to the
other — but they went only so far, creating a thin barrier of excess charges
right at the central crack. That barrier created a one way street — electrons
could now only travel in one direction across it.
When Ohl shined light on the rod, energy from the light kicked sluggish
electrons out of their resting places and gave them the boost they needed to
travel around the crystal. But due to the barrier, there was only one way they
could travel. All those electrons moving in a single direction became an
electric current. Ohl’s crystal was the ancestor of modern day solar cells,
which take energy from the sun and convert it into electricity. But for Bell
Labs on that day, it opened up the idea that crystals might be just the thing
needed to replace vacuum tubes.
The First Silicon Transistor
It was late afternoon at a conference for
the Institute of Radio Engineers. Many people giving talks had complained about
the current germanium transistors — they had a bad habit of not working at high
temperatures. Silicon, since it’s right above germanium on the periodic table
and has similar properties, might make a better gadget. But, they said, no one
should expect a silicon transistor for years.
Then Gordon Teal of Texas Instruments stood up to give his talk. He pulled three
small objects out of his pocket and announced: “Contrary to what my
colleagues have told you about the bleak prospects for silicon transistors, I
happen to have a few of them here in my pocket.”
That moment catapulted TI from a small start-up electronics company into a major
player. They were the first company to produce silicon transistors — and
consequently the first company to produce a truly consistent, mass-produced
transistor.
Scientists knew about the problems with germanium transistors. Germanium worked,
but it had its mood swings. When the germanium heated up — a natural outcome of
being part of an electrical circuit — the transistor would have too many free
electrons. Since a transistor only works because it has a specific, limited
number of electrons running around, high heat could stop a transistor from
working altogether.
While still working at Bell Labs in 1950, Teal began growing silicon crystals to
see if they might work better. But just as it had taken years to produce pure
enough germanium, it took several years to produce pure enough silicon. By the
time he succeeded, Teal was working at Texas Instruments. Luring someone as
knowledgeable about crystals as Teal away from Bell proved to be one of the most
important things TI ever did.
On April 14, 1954, Gordon Teal showed TI’s Vice President, Pat Haggerty, a
working silicon transistor. Haggerty knew if they could be the first to sell
these new transistors, they’d have it made. The company jumped into action —
four weeks later when Teal told his colleagues about the silicon transistors in
his pocket, TI had already started production.
A Working Junction Transistor
There was no doubt about it, point-contact
transistors were fidgety. The transistors being made by Bell just didn’t work
the same way twice, and on top of that, they were noisy. While one lab at Bell
was trying to improve those first type-A transistors, William Shockley was
working on a whole different design that would eventually get rid of these
problems.
Early in 1948, Shockley conceived of a transistor that looked like a sandwich,
with two layers of one type of semiconductor surrounding a second kind. This was
a completely different setup which didn’t have the shaky wires that made the
point-contact transistors so hard to control.
A working sandwich transistor would require that electricity travel straight
across a crystal instead of around the surface. But Bardeen’s theory about how
the point-contact transistor worked said that electricity could only travel
around the outside of a semiconductor crystal. In February of 1948, some
tentative results in the Shockley lab suggested this might not be true. So the
first thing Shockley had to do was determine just what was going on.
Careful experiments led by a physicist in the group, Richard Haynes, helped.
Haynes put electrodes on both sides of a thin germanium crystal and took very
sensitive measurements of the size and speed of the current. Electricity
definitely flowed straight through the crystal. That meant Shockley’s vision of
a new kind of transistor was theoretically possible.
But Haynes also discovered that the layer in the middle of the sandwich had to
be very thin and very pure.
The man who paved the way for growing the best crystals was Gordon Teal. He
didn’t work in Shockley’s group, but he kept tabs on what was going on. He’d
even been asked to provide crystals for the Solid State team upon occasion. Teal
thought transistors should be built from a single crystal, as opposed to cutting
a sliver from a larger ingot of many crystals. The boundaries between all the
little crystals caused ruts that scattered the current, and Teal had heard of a
way to build a large single crystal which wouldn’t have all those crags. The
method was to take a tiny seed crystal and dip it into the melted germanium.
This was then pulled out ever so slowly, as a crystal formed like an icicle
below the seed.
Teal knew how to do it, but no one was interested. A number of institutions at
the time, Bell included, had a bad habit of not trusting techniques that hadn’t
been devised at home. Shockley didn’t think these single crystals were necessary
at all. Jack Morton, head of the transistor-production group, said Teal should
go ahead with the research, but didn’t throw much support his way.
Luckily, Teal did continue the research, working with engineer John Little.
Three months later, in March of 1949, Shockley had to admit he’d been wrong.
Current flowing across Teal’s semiconductors could last up to one hundred times
longer than it had in the old cut crystals.
Nice crystals are all well and good, but a sandwich transistor needed a sandwich
crystal. The outer layers had to be a semiconductor with either too many
electrons (known as N-type) or too few (known as P-type), while the inner layer
was the opposite. Under Shockley’s prodding, Teal and Morgan Sparks began adding
impurities to the melt while they pulled the crystal out of the melt. Adding
impurities is known as “doping,” and it’s how one turns a
semiconductor into N- or P-type.
As they pulled the seed crystal out of an N-type germanium melt, they quickly
added some gallium to turn the melt into P-type. As a layer of P-type formed on
the ever-lengthening crystal, they added antimony, which compensated for the
gallium and turned the melt back into N-type. Once the process was done, there
was a single, thin crystal formed into a perfect sandwich.
By etching away the surface of the outside layers, Sparks and Teal left a tiny
bit of P-type crystal protruding. To this they attached a fine
electrode, creating a circuit the way Shockley had envisioned. On April 12, 1950,
they tested what they had built. Without a doubt, more current came out of the
sandwich than went in. It was a working amplifier.
The first junction transistor had been born.
But It Wasn’t a Very Good One . . .
This transistor could amplify electrical signals, but not particularly
complicated ones. If the signal changed rapidly, as a voice coming over a phone
line does, the transistor couldn’t keep up and would garble the output. The
problem lay in the middle of the sandwich: it was too easy for electric current
to spread out and become unfocused as it crossed the P-type layer. To solve the
problem, the layer had to be even thinner.
In January of 1951, Morgan Sparks figured out a way to accomplish that. By
pulling the crystal out more slowly than ever, while constantly stirring the
melt, he managed to get the middle layer of the sandwich thinner than a sheet of
paper.
This new, improved sandwich did all that the researchers hoped. These transistors
still weren’t up to the point-contact transistor’s ability to handle signals that
fluctuated extremely rapidly, but in every other way they were superior. They
were much more efficient, used very little power to work, and they were so much
quieter that they could handle weaker signals than the type-A transistors ever could.
In July of 1951, Bell held another press conference — this time announcing the
invention of a working and efficient junction transistor.
Field Effect Transistors
In 1945, Shockley had an idea for making a
solid state device out of semiconductors. He reasoned that a strong electrical
field could cause the flow of electricity within a nearby semiconductor. He
tried to build one, then had Walter Brattain try to build it, but it didn’t
work.
Three years later, Brattain and Bardeen built the first working transistor, the
germanium point-contact transistor, which was manufactured as the “A”
series. Shockley then designed the junction (sandwich) transistor, which was
manufactured for several years afterwards. But in 1960 Bell scientist John
Atalla developed a new design based on Shockley’s original field-effect
theories. By the late 1960s, manufacturers converted from junction type
integrated circuits to field effect devices. Today, most transistors are
field-effect transistors. You are using millions of them now.
Most of today’s transistors are “MOS-FETs,” or Metal Oxide
Semiconductor Field Effect Transistors. They were developed mainly by Bell Labs,
Fairchild Semiconductor, and hundreds of Silicon Valley, Japanese, and other
companies.
Field-effect transistors are so named because a weak electrical signal coming in
through one electrode creates an electrical field through the rest of the
transistor. This field flips from positive to negative when the incoming signal
does, and controls a second current traveling through the rest of the
transistor. The field modulates the second current to mimic the first one — but
it can be substantially larger.
On the bottom of the transistor is a U-shaped section (though it’s flatter than
a true “U”) of N-type semiconductor with an excess of electrons. In
the center of the U is a section known as the “base” made of P-type
(positively charged) semiconductor with too few electrons. (Actually, the N- and
P-types can be reversed and the device will work in exactly the same way, except
that holes, not electrons, would cause the current.)
Three electrodes are attached to the top of this semiconductor crystal: one to
the middle positive section and one to each arm of the U. By applying a voltage
to the electrodes on the U, current will flow through it. The side where the
electrons come in is known as the source, and the side where the electrons come
out is called the drain.
If nothing else happens, current will flow from one side to the other. Due to
the way electrons behave at the junction between N- and P-type semiconductors,
however, the current won’t flow particularly close to the base. It travels only
through a thin channel down the middle of the U.
There’s also an electrode attached to the base, a wedge of P-type semiconductor
in the middle, separated from the rest of the transistor by a thin layer of
metal-oxide such as silicon dioxide (which plays the role of an insulator). This
electrode is called the “gate.” The weak electrical signal we’d like
to amplify is fed through the gate. If the charge coming through the gate is
negative, it adds more electrons to the base. Since electrons repel each other,
the electrons in the U move as far away from the base as possible. This creates
a depletion zone around the base, a whole area where electrons cannot travel.
The channel down the middle of the U through which current can flow becomes even
thinner. Add enough negative charge to the base and the channel will pinch off
completely, stopping all current. It’s like stepping on a garden hose to stop
the flow of water. (Earlier transistors controlled this depletion zone by making
use of how electrons move when two semiconductor slabs are put next to each
other, creating what is known as a P-N junction. In a MOS-FET, the P-N junction
is replaced with metal-oxide, which turned out to be easier to mass-produce.)
Now imagine if the charge coming through the gate is positive. The positive base
attracts many electrons, and suddenly the area around the base which used to be a
no-man’s-land opens up. The channel for current through the U becomes larger
than it was originally and much more electricity can flow through.
Alternating charge on the base, therefore, changes how much current goes through
the U. The incoming current can be used as a faucet to turn current on or off as
it moves through the rest of the transistor.
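The faucet-like control described above is commonly captured by the first-order "square-law" model taught in electronics courses. The sketch below is that textbook idealization, not the physics of any particular device; the threshold voltage and transconductance parameter are illustrative assumptions.

```python
def mosfet_drain_current(v_gs, v_ds, v_th=1.0, k=0.5e-3):
    """First-order (square-law) model of an N-channel MOSFET.

    v_gs: gate-to-source voltage; v_ds: drain-to-source voltage
    v_th: threshold voltage (illustrative), k: transconductance parameter
    Returns drain current in amperes.
    """
    if v_gs <= v_th:
        return 0.0                      # channel pinched off: faucet closed
    v_ov = v_gs - v_th                  # overdrive voltage
    if v_ds < v_ov:                     # triode (linear) region
        return k * (v_ov * v_ds - v_ds ** 2 / 2.0)
    return 0.5 * k * v_ov ** 2          # saturation region

# Below threshold no current flows; above it, the gate voltage sets the flow.
print(mosfet_drain_current(0.5, 5.0))   # gate below threshold: no current
print(mosfet_drain_current(2.0, 5.0))   # channel open: current flows
```

Raising the gate voltage widens the channel and lets more current through, just as the text describes; pushing it below the threshold pinches the channel off entirely.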
On the other hand, the transistor can be used in a more complex manner as well
— as an amplifier. Current traveling through the U gets larger or smaller in
perfect sync with the charge coming into the base, meaning it has the identical
pattern as that original weak signal. And, since the second current is connected
to a different voltage supply, it can be made to be larger. The current coming
through the U is a perfect replica of the original, only amplified. The
transistor is used this way for stereo amplification in speakers and
microphones, as well as to boost telephone signals as they travel around the
world.