Quantum/molecular computing

sash1

Diamond Member
Jul 20, 2001
8,896
1
0
Molecular computing is extremely fascinating. But the more fascinating it becomes, the more confusing it becomes. I guess I can best state this in the words of Niels Bohr: "If you are not confused by quantum physics, you haven't really understood it." That's just the way I feel: :confused: (and please do read on, it's not really that long, and it's not just me blabbing about random things, I eventually get to the point :D)

In a recent Popular Science article I read that IBM is working on a CPU that uses carbon monoxide transistors. This got me thinking. I had recently finished the chapter on logic in my math class, which was extremely fascinating, and it got me thinking about how a computer actually works. Silicon makes a very good transistor, and I understand that companies are attempting to make transistors out of molecules instead of silicon. If memory serves me, they have created a transistor using a sulfur, carbon, and hydrogen bond... a thiol molecule as a transistor. Companies have also been wanting to use carbon for computing, which IBM has now demonstrated.

But I have also seen the beauty of quantum computing. The fact that the nuclei of atoms can represent the binary states necessary for computing, and can also represent both binary values at once, is very fascinating. As if being in two positions at once. But if, again, memory serves me, this didn't work out as planned: the nuclei could not sustain this state long enough and would collapse into either a 0 or a 1. Fascinating, too bad it failed. But is there any chance this will succeed? Are companies still working on this?

But then there is DNA as a computer. This seems far-fetched, but makes sense... I guess. Considering how much "data" storage a double strand of DNA can hold, it makes sense that they would attempt to utilize it for computing. However, DNA computing doesn't seem likely, since DNA is error-ridden, correct?

What is fascinating about these two forms of computing is their non-sequential way of completing tasks. Current computing cannot truly multi-task and works in steps to complete processes, but quantum and DNA computing can do parallel processing, which, if companies can make these two forms successful, would greatly increase processor speed.

Now, that's about all [I think] I understand. But now it becomes blurry and I don't exactly grasp how it works. I do, but I kind of don't. I know processors work in binary, but... how? How can a molecule/silicon transistor transfer information and then complete the logic gate? How does a molecule of carbon monoxide, or the thiol molecule, pass bits of information along the transistor and turn them into binary code...?

Then comes the point of IPC. I've always just accepted the fact that speed = frequency x IPC. Just accepted it as true, but now it makes no sense. The faster the electrons pass through the transistors, the higher the frequency, and the faster the processor will perform. The smaller the circuits become, the shorter the distance signals have to travel, raising the frequency. So, what exactly is IPC? I really don't know a better way to put it than that. I mean, what is it... really? And where does it come into play in this picture?

Very cool stuff, but confusing to say the least. And when will we see these molecular chips enter the market? Will they only be used for computers, or will PDAs/cell phones take advantage of them? I understand it is cheaper to make molecular transistors than silicon transistors, so it makes sense that all electronic components would utilize molecular computing...

Thanks HT crew,

~Aunix
 

AbsolutDealage

Platinum Member
Dec 20, 2002
2,675
0
0
Oh my lord, there is a lot to answer here... let's start small.

Now, that's about all [I think] I understand. But now it becomes blurry and I don't exactly grasp how it works. I do, but I kind of don't. I know processors work in binary, but... how? How can a molecule/silicon transistor transfer information and then complete the logic gate? How does a molecule of carbon monoxide, or the thiol molecule, pass bits of information along the transistor and turn them into binary code...?

Throwing out the bit on molecular computing (for now), let's just talk about the transistor. The transistor (when working in a digital logic capacity) is basically an electronic switch. To look at it on a basic level, there is an input, an output, and a "gate". If the "gate" is open, it will allow charge to flow from the input to the output. If it is closed, no information will pass.

Now, you say, "how does that help, how can you create logic from just a switch???". Well the real magic behind all of this is when you string more than one of them together. Let's look at the operation of a basic logic gate: (for this example, we will assume you have 2 binary values, A and B)

AND gate operation
 A   B  | Output
off off | off
off on  | off
on  off | off
on  on  | on

Now, how would we make a set of transistors to emulate this operation? Well, we could use our 2 signals A and B as the "gate" signals on 2 transistors, and then string them together input-output-input-output. This would cause the resulting output of the second transistor to be asserted whenever both A and B were asserted. *poof*... you have a logic gate. Now, granted, there is a lot more that goes into the making of a complex processor, but a large majority of the chip can be boiled down to logic gates like the one we just created. Of course, there are more instructions than just an AND gate, but you can easily extrapolate what an OR gate would look like, etc.
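If it helps to see that in working form, here is a tiny Python sketch of the idea (my own toy illustration with made-up function names, not a real circuit model): each transistor is a switch that passes charge only while its gate is asserted, and chaining two in series gives you the AND.

def transistor(gate, charge_in):
    # A digital-logic transistor acts as a switch: charge flows from
    # input to output only while the gate is asserted.
    return charge_in if gate else False

def and_gate(a, b):
    # String two transistors together input-output-input-output:
    # charge must survive BOTH switches to reach the final output.
    return transistor(b, transistor(a, True))

for a in (False, True):
    for b in (False, True):
        # Same truth table as above, with False/True standing in for off/on
        print(a, b, '->', and_gate(a, b))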

Now, when you start getting into molecular computing, this is where it becomes a little more hazy. Basically, for the immediate future, molecular computing will be largely an academic exercise. The large-scale fabrication of a molecular transistor is far from being implemented. On a large-scale view, it is going to be the same kinds of operations as a silicon transistor, but on a much smaller scale. I won't really go into much here, but if you want more info there is plenty of it out there to look at. Basically, instead of using doped silicon, you are using individual atoms that pass individual electrons around to achieve the same kinds of operations. The problem arises when they try to connect these gates together, and there is a lot of research being done as we speak to alleviate some of the inherent issues with this.

So, what exactly is IPC? I really don't know a better way to put it than that. I mean, what is it... really? And where does it come into play in this picture?

IPC (instructions per cycle) is basically a measure of how efficient a processor is. A modern processor takes many clock cycles to perform a single operation. For instance, if you want to divide two binary values, the processor may take 30 clock cycles to complete this operation. This is because a divide operation must go through several steps (remember long division in school? Yeah, it's something like that). Now, a processor manufacturer may spend money researching a faster way to do a divide (no, not faster actual math; they may have an engineer spend his time trying to come up with a group of transistors that will speed the process along). So let's say that company XYZ has a special-purpose processor. This processor does an add/subtract instruction in 2 cycles, a multiply in 24 cycles, and a divide in 45 cycles. They put this processor through testing and find out that during normal operation in its intended environment, it will be doing 45% multiply instructions. That would mean that a good portion of processing time on this processor will be spent doing multiplies. So, if this company wanted to improve the design of their processor, they would first improve the multiply instruction's design to push it to, say, 20 cycles.
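To put rough numbers on that, here's a quick Python calculation using the made-up XYZ cycle counts above, plus an instruction mix I'm inventing purely for illustration (only the 45% multiply figure comes from the example):

# Hypothetical XYZ processor: cycles taken by each instruction type
cycles = {'add_sub': 2, 'multiply': 24, 'divide': 45}

# Assumed workload: 45% multiplies (from the example); the rest is a guess
mix = {'add_sub': 0.40, 'multiply': 0.45, 'divide': 0.15}

avg_cpi = sum(mix[op] * cycles[op] for op in cycles)
print('average cycles per instruction:', avg_cpi)    # 18.35
print('average IPC:', round(1 / avg_cpi, 3))         # 0.054

# Speeding the multiply up from 24 to 20 cycles attacks the biggest
# share of the runtime:
cycles['multiply'] = 20
print('improved CPI:', sum(mix[op] * cycles[op] for op in cycles))  # 16.55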

<takes a big breath>

So basically IPC, along with the clock speed, will give you an adequate measure of the overall productivity of a processor. You know how the AMD processors are named, say, Athlon 1700+, but only run at 1.47 GHz? That is because Intel decided that IPC was not as important as clock speed, and decided to pump up the frequency without really improving the overall design (roughly, the IPC). AMD's sales kind of slumped off, because that was a good marketing move for Intel. Basically AMD came back with this little name game to say that their 1.47 GHz processor has equivalent overall processing power to a Pentium at 1.7 GHz.
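In other words, overall speed is roughly clock x IPC, which is all the "1700+" rating is really claiming. In Python, with IPC values I'm making up just to show the arithmetic (not real measurements):

# performance ~ clock * IPC (illustrative IPC values, not measurements)
athlon_perf = 1.47e9 * 1.16   # higher IPC at a lower clock
p4_perf     = 1.70e9 * 1.00   # lower IPC at a higher clock
print(athlon_perf, p4_perf)   # about the same -- hence the name game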

Ugh, time to go home from work. Hope this at least gets you started... ;)
 

sash1

Diamond Member
Jul 20, 2001
8,896
1
0
"AND gate operation
_A_B_|_Output
off off | off
off on | off
on off | off
on on | on

Now, how would we make a set of transistors to emulate this operation? Well, we could use our 2 signals A and B as the "gate" signals on 2 transistors, and then string them together input-output-input-output. This would cause the resulting output of the second transistor to be asserted whenever both A and B were asserted *poof*.. you have a logic gate. Now, granted, there is a lot more that goes into the making of a complex processor, but a large majority of the chip can all be boiled down to logic gates, like the one we just created. Of course, there are more instructions then just an AND gate, but you can easily extrapolate what an OR gate would look like, etc."


Wow, it's amazing how much that helped. That just kinda helped put into perspective the logic that I learned in math. So the stuff I was doing (DNF and SOPs), basically, that's what a logic gate consists of? What does the end operation look like?

And how does having a smaller gate (minimum SOP) help? Same result, fewer steps, so it makes it faster, right?

ex:

SOP: a'b' + ab'c' + b'c + a'bc
will equal
Minimum SOP: b' + a'c

So, by making that gate significantly smaller, how much more throughput are you getting? Is that what this is all about?
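(I brute-forced that reduction in Python just to convince myself, writing each prime as a not -- both forms agree on all eight inputs:)

from itertools import product

def sop(a, b, c):
    # a'b' + ab'c' + b'c + a'bc
    return ((not a and not b) or (a and not b and not c)
            or (not b and c) or (not a and b and c))

def min_sop(a, b, c):
    # b' + a'c
    return (not b) or (not a and c)

print(all(sop(a, b, c) == min_sop(a, b, c)
          for a, b, c in product((False, True), repeat=3)))   # True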

"IPC (Instructions per Cycle) is basically a measure of how efficient a processor is. A modern processor takes many clock cycles to perform a single operation. For instance, if you want to divide two binary values, the processor may take 30 clock cycles to complete this operation. This is because a divide operation must go through several steps (remember long division in school? yea, its something like that). Now, a processor manufacturer may spend money researching a faster way to do a divide (no, not faster actual math, they may have an engineer spend his time trying to come up with a group of transistors that will speed the process along). So let's say that company XYZ has a special purpose processor. This processor does an add/subtract instruction in 2 cycles, a multiply in 24 cycles and a divide in 45 cycles. They put this processor through testing and find out that during normal operation in its intended environment, it will be doing 45% multiply instructions. That would mean that a good portion of processing time on this processor will be spent doing multiplies. So, if this company wanted to improve the design of thier processor, they would first improve the multiply instruction's design to push it to, say 20 cycles."

Ahh! Good example too :)

Thanks a lot. I think I will look into this more. I also have an uncle who actually worked for IBM (he just retired); I think I'll talk to him as well.

Thanks a lot,

~Aunix
 

kevinthenerd

Platinum Member
Jun 27, 2002
2,908
0
76
Originally posted by: AbsolutDealage

Throwing out the bit on molecular computing (for now), let's just talk about the transistor. The transistor (when working in a digital logic capacity) is basically an electronic switch. To look at it on a basic level, there is an input, an output, and a "gate". If the "gate" is open, it will allow charge to flow from the input to the output. If it is closed, no information will pass.

Correction: A single transistor does not act like a switch. In computers, transistors act like switches, yes, but here's how:

A single bit of memory in a computer is actually a pair of transistors acting together. They're configured as two amplifiers with a gain of zero, and their inputs are linked to their outputs to form a loop: output to input, output to input. Within the loop, they have an input and an output, which, electronically, are the same.

I need an electronic engineer to help me explain the rest and correct any mistakes from here on in the process of data storage and retrieval.
 

everman

Lifer
Nov 5, 2002
11,288
1
0
Isn't one of the problems with quantum computers that by seeing the output, you also destroy it? Or something like that?

On a somewhat related note, I've wondered if quantum entanglement could be used within a processor to transfer data at faster-than-light speeds... That is, in simple terms, when something happens to object A, the same thing happens to object B at the same time no matter where B is in relation to A; the original object A is destroyed in the process, I think.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: AbsolutDealage


So, what exactly is IPC? I really don't know a better way to put it than that. I mean, what is it... really? And where does it come into play in this picture?

IPC (instructions per cycle) is basically a measure of how efficient a processor is. A modern processor takes many clock cycles to perform a single operation. For instance, if you want to divide two binary values, the processor may take 30 clock cycles to complete this operation. This is because a divide operation must go through several steps (remember long division in school? Yeah, it's something like that). Now, a processor manufacturer may spend money researching a faster way to do a divide (no, not faster actual math; they may have an engineer spend his time trying to come up with a group of transistors that will speed the process along). So let's say that company XYZ has a special-purpose processor. This processor does an add/subtract instruction in 2 cycles, a multiply in 24 cycles, and a divide in 45 cycles. They put this processor through testing and find out that during normal operation in its intended environment, it will be doing 45% multiply instructions. That would mean that a good portion of processing time on this processor will be spent doing multiplies. So, if this company wanted to improve the design of their processor, they would first improve the multiply instruction's design to push it to, say, 20 cycles.

<takes a big breath>

So basically IPC, along with the clock speed, will give you an adequate measure of the overall productivity of a processor. You know how the AMD processors are named, say, Athlon 1700+, but only run at 1.47 GHz? That is because Intel decided that IPC was not as important as clock speed, and decided to pump up the frequency without really improving the overall design (roughly, the IPC). AMD's sales kind of slumped off, because that was a good marketing move for Intel. Basically AMD came back with this little name game to say that their 1.47 GHz processor has equivalent overall processing power to a Pentium at 1.7 GHz.

Ugh, time to go home from work. Hope this at least gets you started... ;)

Other than your little bit about why AMD/Intel use the designs they use, it's a good answer ;)

Let's take a VERY simple processor. It is just a programmable calculator - the instructions available are add a, b, c and subtract a, b, c. (a, b, c are numbers in memory; no way to load these numbers from constants ;)). One way to do it would be to do the following all in one clock cycle:

1. read the instruction and figure out what we're going to do
2. read memory location a
3. read memory location b
4. perform the add or subtract
5. write the result to location c

With this setup, the IPC is exactly 1, because one instruction takes one (VERY long) clock cycle. Now, let's improve this design. We're going to have 5 clock cycles per instruction, each doing one of the 5 things above. So, on cycle 1, we decide what to do; on cycle 2, we read a; on cycle 3, we read b; and so on. Note that the IPC will be 1/5. The thing you have to remember is that ideally each of those steps takes 1/5 of the time, so the end result is the SAME performance.

A more advanced implementation is a pipelined processor - multicycle like the one described, but we do more than one thing at a time:
1. read instruction i
2. read a (for instruction i), and read instruction ii
3. read b (for instruction i), a (for instruction ii), and instruction iii
4. do the op for instruction i, read b for instruction ii, read a for instruction iii, and read instruction iv
5. write c for instruction i, operate for ii, read b for iii, read a for iv, and read instruction v
6. write c for ii, operate for iii, read b for iv, read a for v, and read instruction vi

(note that this requires the ability to do 3 or 4 memory accesses in a cycle, which I didn't have in the other 2, but for the sake of understanding the concepts this can be ignored)

A picture would really help, but I don't have one offhand. To see how this performs, note that a given instruction takes 5 cycles from start to finish, but at any time, multiple instructions are being processed. Also, every single cycle, one instruction is completed (well, from the 5th cycle forward). So, the IPC is 1, even though each individual instruction takes a bunch of cycles, and the actual performance of the machine is 5 times the performance of the original, since the clock is 5 times faster.
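A quick back-of-the-envelope in Python, with made-up time units (each of the 5 steps takes 1 unit, so the single-cycle design needs a clock period of 5 while the other two clock at 1):

# Total time for N instructions under the three designs above
N = 1000
single_cycle = N * 5          # one long cycle per instruction: 5000 units
multicycle   = (N * 5) * 1    # five short cycles per instruction: 5000 units
pipelined    = (5 - 1) + N    # fill the pipe once, then 1 per cycle: 1004 units
print(single_cycle, multicycle, pipelined)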

Now, a modern processor is MUCH more advanced than this - there are multiple pipelines working on multiple instructions, instructions are executed out of order, etc., so you can't just do a simple analysis like this to see how an Athlon will perform vs. a P4. In general, a longer pipeline lets you do less in each stage, so you can clock the design faster. The P4's 20-stage pipeline lets it run at up to 3 GHz currently, whereas the shorter pipeline of the Athlon results in more work per clock, and therefore a slower max clock speed. (There are other factors that play into this - Intel may have better chip fabs than AMD, so their transistors are better, and the P4 and Athlon both execute instructions VERY VERY differently, but again that is beyond the scope of this course - other than the fact that transistor speed doesn't really affect IPC, just the clock rate. I'll shut up before I go over my own head. ;))

Fanboys will often say that AMD is "more efficient" because more work is done per clock, but chasing high IPC by itself is stupid if you look at the above examples - you have to consider how fast you can clock the machine as well. A good processor is a fast processor - if VIA came out with a chip powered by sewage running through pipes that gave you 6000 FPS in Doom III, would it really matter that the implementation is ugly and smelly?

Originally posted by: kevinthenerd
Originally posted by: AbsolutDealage

Throwing out the bit on molecular computing (for now), let's just talk about the transistor. The transistor (when working in a digital logic capacity) is basically an electronic switch. To look at it on a basic level, there is an input, an output, and a "gate". If the "gate" is open, it will allow charge to flow from the input to the output. If it is closed, no information will pass.

Correction: A single transistor does not act like a switch. In computers, transistors act like switches, yes, but here's how:

A single bit of memory in a computer is actually a pair of transistors acting together. They're configured as two amplifiers with a gain of zero, and their inputs are linked to their outputs to form a loop: output to input, output to input. Within the loop, they have an input and an output, which, electronically, are the same.

I need an electronic engineer to help me explain the rest and correct any mistakes from here on in the process of data storage and retrieval.

No, a transistor does act like a switch. There are two types of switches - voltage on the gate to turn on, and voltage on the gate to turn off, but AbsolutDealage is correct.

To the best of my knowledge you can't build memory with two transistors. The simplest way to store a bit with transistors is to take two NOT gates (see crude drawing - the top part is how you make a CMOS NOT gate, the bottom is the layout for a bit of memory). With this crude design, to write to the bit you have to overpower the bottom NOT gate, which will try to keep storing the old value, but it can be done. Then to read, you just look at the input/output wire.
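If it helps, here's a toy Python model of that two-NOT-gate loop (a sketch of the concept with names I made up, not a circuit simulation): the feedback keeps re-driving the stored node, and a write just overpowers it.

class TwoInverterBit:
    def __init__(self):
        self.q = False            # the stored node

    def settle(self):
        # The loop feeds q through both NOT gates: not(not q) == q,
        # so the feedback holds whatever value is already there.
        self.q = not (not self.q)

    def write(self, value):
        # "Overpower" the feedback by forcing the node, then let it settle
        self.q = bool(value)
        self.settle()

    def read(self):
        # Just look at the input/output wire
        return self.q

bit = TwoInverterBit()
bit.write(True)
bit.settle(); bit.settle()        # value persists through the loop
print(bit.read())                 # True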

... and now I too must do homework ;)

edit:
Originally posted by: everman
Isn't one of the problems with quantum computers that by seeing the output, you also destroy it? Or something like that?

DRAM (nothing like the memory described above; DRAM uses capacitors) is also destroyed when you read it. The charge stored is tiny, and all of it is used up when you read it. To compensate for this, there is circuitry that takes the value that was read and writes it back. I think something makes it more complicated in the quantum situation, though.
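In toy Python form (again just a sketch of the read-then-restore idea, not how real sense amps work):

class DRAMCell:
    def __init__(self):
        self.charge = 0.0          # capacitor charge

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0

    def read(self):
        # Reading dumps the tiny charge onto the bitline, destroying it...
        value = self.charge > 0.5
        self.charge = 0.0
        # ...so the support circuitry immediately writes the value back.
        self.write(value)
        return value

cell = DRAMCell()
cell.write(True)
print(cell.read(), cell.read())    # True True -- the restore is what
                                   # makes repeated reads possible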

 

AbsolutDealage

Platinum Member
Dec 20, 2002
2,675
0
0
Originally posted by: kevinthenerd
Originally posted by: AbsolutDealage

Throwing out the bit on molecular computing (for now), let's just talk about the transistor. The transistor (when working in a digital logic capacity) is basically an electronic switch. To look at it on a basic level, there is an input, an output, and a "gate". If the "gate" is open, it will allow charge to flow from the input to the output. If it is closed, no information will pass.

Correction: A single transistor does not act like a switch. In computers, transistors act like switches, yes, but here's how:

A single bit of memory in a computer is actually a pair of transistors acting together. They're configured as two amplifiers with a gain of zero, and their inputs are linked to their outputs to form a loop: output to input, output to input. Within the loop, they have an input and an output, which, electronically, are the same.

I need an electronic engineer to help me explain the rest and correct any mistakes from here on in the process of data storage and retrieval.

Ummm... yeah. First of all, you are talking to an electrical engineer; second of all, they do act like switches... unless I've been living a lie for the past 5 years or so ;)

Edit:

No, a transistor does act like a switch. There are two types of switches - voltage on the gate to turn on, and voltage on the gate to turn off, but AbsolutDealage is correct.

Oh, I didn't see your post there. Thanks for the backup ;)
And about the 2 types, I just didn't want to get into the whole p/n transistor thing for such a simplified explanation ;)
 

AbsolutDealage

Platinum Member
Dec 20, 2002
2,675
0
0
Other than your little bit about why AMD/Intel use the designs they use, it's a good answer ;)

While I merely hinted at this before, it is in actuality true. Intel spends an incredible amount of money on advertising and market research. They basically had to create the model and market-forecast system for the consumer-driven processor market.

All of their research in the past 5 or 6 years has shown that the "average" user is basically not technically inclined, has no deep knowledge about computers, and has no desire to gain that knowledge. Basically they concluded that Joe Consumer going into Best Buy probably couldn't give a rat's @$$ about IPC, overall efficiency, etc. The guy would see the 10 systems lined up on the shelf and break them down by one number: clock speed. Intel realized this, and they pretty much gave up on making the individual functional units more efficient and decided to ramp up the clock speed. We actually spent a week in my computer architecture class talking about this very thing. I'm not saying that Intel has not improved anything in their processor besides the clock... they have made significant performance changes in these last couple of core designs. I am saying that Intel has focused more on increasing the clock than on increasing their IPC (or other performance metrics, for that matter).

Don't get me wrong or anything, I am an Intel fan. I run a couple of Intel systems as well as a couple of AMD systems... I don't hate. I just acknowledge that Intel spends more money and does more market research, and they are driven by that. AMD has largely followed in Intel's footsteps as far as marketing is concerned (this, as I said before, has led to the whole Athlon XP XXXX+ fiasco).
 

kevinthenerd

Platinum Member
Jun 27, 2002
2,908
0
76
Originally posted by: AbsolutDealage
Originally posted by: kevinthenerd
Originally posted by: AbsolutDealage

Throwing out the bit on molecular computing (for now), let's just talk about the transistor. The transistor (when working in a digital logic capacity) is basically an electronic switch. To look at it on a basic level, there is an input, an output, and a "gate". If the "gate" is open, it will allow charge to flow from the input to the output. If it is closed, no information will pass.

Correction: A single transistor does not act like a switch. In computers, transistors act like switches, yes, but here's how:

A single bit of memory in a computer is actually a pair of transistors acting together. They're configured as two amplifiers with a gain of zero, and their inputs are linked to their outputs to form a loop: output to input, output to input. Within the loop, they have an input and an output, which, electronically, are the same.

I need an electronic engineer to help me explain the rest and correct any mistakes from here on in the process of data storage and retrieval.

Ummm... yeah. First of all, you are talking to an electrical engineer; second of all, they do act like switches... unless I've been living a lie for the past 5 years or so ;)

Edit:

No, a transistor does act like a switch. There are two types of switches - voltage on the gate to turn on, and voltage on the gate to turn off, but AbsolutDealage is correct.

Oh, I didn't see your post there. Thanks for the backup ;)
And about the 2 types, I just didn't want to get into the whole p/n transistor thing for such a simplified explanation ;)

Yeah, I guess you're right, but let's get something straight. When most people think of a switch, they think of something that can be set and forgotten. A transistor needs electricity to keep the switch set, no? RAM loses its contents when the power is off, but I hope MagRAM by Micromem Technologies can fix that. I wouldn't explain it with the term "switch" so much as "relay."

But is a NOT gate similar at all to a class D amplifier? If not, I'm misinformed. I remember what I was told, but now I'm finding out that the one who explained all of this was full of sh.....


I have a few questions of my own...

Does most standard RAM use BJTs (bipolar junction), MOSFETs (metal-oxide-semiconductor field-effect), or some other type?
Does most standard RAM use NPN or PNP transistors? I heard somewhere that NPNs are easier to make and slightly cheaper, but that was the same source that supposedly told me how RAM works.

 

AbsolutDealage

Platinum Member
Dec 20, 2002
2,675
0
0
Yeah, I guess you're right, but let's get something straight. When most people think of a switch, they think of something that can be set and forgotten. A transistor needs electricity to keep the switch set, no? RAM loses its contents when the power is off, but I hope MagRAM by Micromem Technologies can fix that. I wouldn't explain it with the term "switch" so much as "relay."

True, a transistor can better be compared to a relay... but most people are not acquainted with the operation of a relay, so normally we use the analogy of a switch. And no, a transistor does not need power in order to keep functioning. If the gate is disconnected, it will simply be set to only pass or only block the charge going across its terminals.

But is a NOT gate similar at all to a class D amplifier? If not, I'm misinformed. I remember what I was told, but now I'm finding out that the one who explained all of this was full of sh.....

Well, the only similarity is that the active transistor in a class D amplifier operates in the same mode (as a switch, essentially). However, the amplifier works with signals completely differently.

I have a few questions of my own...

Does most standard RAM use BJTs (bipolar junction), MOSFETs (metal-oxide-semiconductor field-effect), or some other type?
Does most standard RAM use NPN or PNP transistors? I heard somewhere that NPNs are easier to make and slightly cheaper, but that was the same source that supposedly told me how RAM works.

RAM uses MOSFETs for all of its control logic; it would be simply enormous if it tried to use BJTs. However, RAM (in the case of most DRAM implementations) uses special MOS capacitors that are entirely different from a regular transistor (usually, that is... there are implementations of 1T cells that use standard MOSFETs).

 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: kevinthenerd
Originally posted by: AbsolutDealage
Originally posted by: kevinthenerd
Originally posted by: AbsolutDealage

Throwing out the bit on molecular computing (for now), let's just talk about the transistor. The transistor (when working in a digital logic capacity) is basically an electronic switch. To look at it on a basic level, there is an input, an output, and a "gate". If the "gate" is open, it will allow charge to flow from the input to the output. If it is closed, no information will pass.

Correction: A single transistor does not act like a switch. In computers, transistors act like switches, yes, but here's how:

A single bit of memory in a computer is actually a pair of transistors acting together. They're configured as two amplifiers with a gain of zero, and their inputs are linked to their outputs to form a loop: output to input, output to input. Within the loop, they have an input and an output, which, electronically, are the same.

I need an electronic engineer to help me explain the rest and correct any mistakes from here on in the process of data storage and retrieval.

Ummm... yeah. First of all, you are talking to an electrical engineer; second of all, they do act like switches... unless I've been living a lie for the past 5 years or so ;)

Edit:

No, a transistor does act like a switch. There are two types of switches - voltage on the gate to turn on, and voltage on the gate to turn off, but AbsolutDealage is correct.

Oh, I didn't see your post there. Thanks for the backup ;)
And about the 2 types, I just didn't want to get into the whole p/n transistor thing for such a simplified explanation ;)

Yeah, I guess you're right, but let's get something straight. When most people think of a switch, they think of something that can be set and forgotten. A transistor needs electricity to keep the switch set, no? RAM loses its contents when the power is off, but I hope MagRAM by Micromem Technologies can fix that. I wouldn't explain it with the term "switch" so much as "relay."

But is a NOT gate similar at all to a class D amplifier? If not, I'm misinformed. I remember what I was told, but now I'm finding out that the one who explained all of this was full of sh.....


I have a few questions of my own...

Does most standard RAM use BJTs (bipolar junction), MOSFETs (metal-oxide-semiconductor field-effect), or some other type?
Does most standard RAM use NPN or PNP transistors? I heard somewhere that NPNs are easier to make and slightly cheaper, but that was the same source that supposedly told me how RAM works.

Granted. It is a spring-loaded switch.
I'm more of a CE than an EE; I don't know squat about amps ;)
IIRC, BJTs aren't used and haven't been used for anything since the 486 or so (I am likely completely wrong - maybe BJTs were never used for this). I'm sure pm can give you the correct answer there, but he's on an extended vacation (real world, not AnandTech ;)).
There are two types of RAM: SRAM and DRAM. SRAM is (MOSFET) transistors only; DRAM usually has one transistor and one capacitor per bit.

I'm not sure what is used in DRAM, but NMOS FETs are significantly faster than PMOS in silicon. However, CMOS means complementary - both n- and p-FETs are used (see my inverter drawing above) to pass both 1s and 0s well.
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
Originally posted by: everman
Isn't one of the problems with quantum computers that by seeing the output, you also destroy it? Or something like that? On a somewhat related note, I've wondered if quantum entanglement could be used within a processor to transfer data at faster-than-light speeds... That is, in simple terms, when something happens to object A, the same thing happens to object B at the same time no matter where B is in relation to A; the original object A is destroyed in the process, I think.

Quantum computers use entanglement to perform "massively parallel computing". It is true that by measuring you destroy the entanglement, but the trick is to let the computer finish the calculation and THEN measure. This destroys the entanglement, but since you already got the result you don't care (if you want to do something else with the result, you have to feed it back into the quantum computer, "turn off" the measurement, and then wait for the new calculation to finish).

Question 2: No, you cannot transfer information faster than light (FTL). You can "teleport" a quantum state, but unfortunately we cannot use this for FTL transfer of information, basically because you need to transfer other information in a classical way before you can make sense of the quantum state that was teleported.
 

kevinthenerd

Platinum Member
Jun 27, 2002
2,908
0
76
Originally posted by: AbsolutDealage

True, a transistor can better be compared to a relay... but most people are not acquainted with the operation of a relay, so normally we use the analogy of a switch. And no, a transistor does not need power in order to keep functioning. If the gate is disconnected, it will simply be set to only pass or only block the charge going across its terminals.

Pull the power off of the base, collector, and emitter of a transistor, and then power it back up. Tell me it doesn't have a wicked case of amnesia.