Go Back   AnandTech Forums > Hardware and Technology > Memory and Storage

Old 09-20-2010, 12:07 AM   #1
ksec
Member
 
Join Date: Mar 2010
Posts: 85
SSD - When we hit the law of diminishing returns

I asked this previously in comments on SSD reviews, but got no answers.

Currently, SandForce-based SSDs give us the best performance: a massive jump in read/write IOPS and transfer rates. The next generation on SATA 3.0 should be able to double those numbers.

However, we have nailed the major bottleneck in our PCs in less than three years. Going from an HDD to a SandForce SSD feels like you bought a brand new computer, but moving from a SandForce SSD to, say, a RevoDrive? Most reviews concentrate on the numbers, with no mention of the actual perception of speed.

Do we need even faster SSDs? Will we see or feel the difference? Are we now bottlenecked by OSes that have been programmed with slow HDDs in mind for decades? What applications need 1GB+/s read/write transfers? Where do the benefits drop off for consumers in terms of IOPS? Will response time matter more than transfer rate for future SSDs?
Old 09-20-2010, 12:17 AM   #2
beginner99
Platinum Member
 
Join Date: Jun 2009
Posts: 2,196
Default

If I hit the power switch on my PC and it's basically instantly ready for use (without hibernate), then I'm fully satisfied.
Yes, SSDs are much faster and boot times get pretty short compared to normal HDDs, but isn't instant-on what we want in the end?

My PC now probably boots faster than my mobile phone. And I don't have one of those "I'm also a coffee machine" phones.
Old 09-20-2010, 12:36 AM   #3
LokutusofBorg
Golden Member
 
LokutusofBorg's Avatar
 
Join Date: Mar 2001
Location: Rocky Mtns, USA
Posts: 1,055
Default

The larger picture of your question(s) has most of its implications in the enterprise space. Will I ever consider a PCIe SSD for my OS in my home computer? Only if they're at a price parity with SATA SSDs. The RevoDrives certainly are in the ballpark, but I don't like the no-TRIM thing. So I guess that's two conditions: price and TRIM.

If you shift to enterprise, then your questions almost aren't worth asking. Plenty of enterprise apps or systems are starved for IOPS and/or throughput that only high-end SANs or expensive (native) PCIe SSDs can deliver. There is a very clear need in that space.
Old 09-20-2010, 01:02 AM   #4
Daemas
Member
 
Join Date: Feb 2010
Posts: 165
Default

Besides what has been said above me, I would say a major limiting factor (for instant power-on) is the motherboard, specifically the BIOS. It takes forever: even with all the extra SATA/USB/onboard sound/IDE/FireWire/parallel/onboard NICs that I don't use turned off, it still takes twice as long to get through the BIOS as to load the OS.
Old 09-20-2010, 02:28 AM   #5
Zap
Super Moderator
Off Topic
Elite Member
 
Zap's Avatar
 
Join Date: Oct 1999
Location: Somewhere Gillbot can't find me
Posts: 22,378
Default

Quote:
Originally Posted by beginner99 View Post
If I hit the power switch on my PC and it's basically instantly ready for use (without hibernate), then I'm fully satisfied.
Yes, SSDs are much faster and boot times get pretty short compared to normal HDDs, but isn't instant-on what we want in the end?
Use S3 sleep. I know that the HTPC crowd has been doing it for a while and notebooks do it, but I only recently fiddled with it on desktops to great success. I've tried it on several systems (socket 1156, 775 and an Atom ITX) and they draw around 1-2W from the wall in sleep according to my Power Angel (like a Kill-A-Watt). Waking it up basically takes as long as your monitor switching modes or turning on.

Quote:
Originally Posted by beginner99 View Post
My PC now probably boots faster than my mobile phone. And I don't have one of those "I'm also a coffee machine" phones.
No kidding. The Blackberry I had for work (and the Palm Treo before it) took ages to boot, literally several minutes. My 6 year old Nokia candy bar phone takes less than 10 seconds from power up to being able to make a call (unless it is still acquiring signal).
__________________
The best way to future-proof is to save money and spend it on future products. (Ken g6)

SSD turns duds into studs. (JBT)
Old 09-20-2010, 03:38 AM   #6
ksec
Member
 
Join Date: Mar 2010
Posts: 85
Default

OK, let's look at instant-on. If we're talking about true instant-on, resuming exactly where we left off, then SSDs aren't the tech to bring it to us. We would need something like MRAM, where memory and storage are unified in the same non-volatile place. That is still another decade away.

Apart from the BIOS, the OS is another thing that needs rework. Intel has demoed a Linux build with a 1GB/s PCIe SSD that boots in less than 3 seconds, from cold boot to UI.

In terms of boot time, the SSD isn't the bottleneck. It is more the BIOS, OS loading and scheduling.

Of course enterprise wants as many IOPS as it can get. But for consumers, we are already not seeing any benefit from going faster than a dual-SandForce solution.

For example, an app that takes 10s to start from an HDD might take 5s from an SSD, but doubling the SSD's speed still leaves a 4s start time. So clearly there must be some other bottleneck.
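That arithmetic can be sketched as a fixed software/CPU overhead plus a storage-bound part, where a faster drive only shrinks the storage share. The 8s/2s split below is an assumed illustration, not a measurement:

```python
# Toy startup-time model for the example above: total time is a storage-bound
# portion (which a faster drive shrinks) plus fixed overhead (which it cannot).
# The 8 s storage / 2 s overhead split is assumed for illustration.

def startup_time(storage_s, overhead_s, drive_speedup):
    """Startup time when only the storage-bound portion is sped up."""
    return storage_s / drive_speedup + overhead_s

STORAGE_HDD_S = 8.0   # seconds spent waiting on the HDD (assumed)
OVERHEAD_S = 2.0      # seconds of CPU/OS work, unaffected by the drive (assumed)

for label, speedup in [("HDD", 1), ("SSD", 2), ("2x-fast SSD", 4)]:
    print(f"{label:>12}: {startup_time(STORAGE_HDD_S, OVERHEAD_S, speedup):.1f} s")
# No matter how fast the drive gets, the time never drops below the overhead.
```

Even an infinitely fast drive floors at the 2s of overhead, which is exactly the diminishing-returns effect the thread is describing.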
Old 09-20-2010, 10:44 AM   #7
razel
Golden Member
 
razel's Avatar
 
Join Date: May 2002
Location: Sunny Los Angeles
Posts: 1,142
Default

Quote:
Originally Posted by beginner99 View Post
If I hit the power switch on my PC and it's basically instantly ready for use (without hibernate), then I'm fully satisfied.
Try standby, without the hybrid hibernate. Instant-on from standby with the fans off has been possible since roughly 2001; at least, that's when I was first satisfied with it, and I've been enjoying instant-on ever since.
Old 09-20-2010, 11:40 AM   #8
beginner99
Platinum Member
 
Join Date: Jun 2009
Posts: 2,196
Default

Quote:
Originally Posted by Zap View Post
Use S3 sleep. I know that the HTPC crowd has been doing it for a while and notebooks do it, but I only recently fiddled with it on desktops to great success. I've tried it on several systems (socket 1156, 775 and an Atom ITX) and they draw around 1-2W from the wall in sleep according to my Power Angel (like a Kill-A-Watt). Waking it up basically takes as long as your monitor switching modes or turning on.



No kidding. The Blackberry I had for work (and the Palm Treo before it) took ages to boot, literally several minutes. My 6 year old Nokia candy bar phone takes less than 10 seconds from power up to being able to make a call (unless it is still acquiring signal).
Well, maybe I'd have to google for it, but with an SSD it's not really an issue, especially if fiddling is needed.
Yeah, old mobiles boot much faster, and that's enough for me. I don't need to surf, check mail or do other stuff on a phone; I'm next to a full-blown PC most of the time anyway (work, home). Luckily I don't need to travel a lot.
Old 09-20-2010, 02:34 PM   #9
taltamir
Lifer
 
taltamir's Avatar
 
Join Date: Mar 2004
Posts: 13,549
Default

I don't think boot time is a big deal... most of it is still the mobo (BIOS bootup)...

The great thing about an SSD is faster program installs (Windows installs fast, Windows updates install fast, programs install fast, games install fast...) and game level load times are blazing fast.

Games and programs will continue to increase in size, so continually increasing speed is beneficial.
__________________
How to protect your data guide
AA Naming Guide

I do not have a superman complex; for I am God, not superman!
The internet is a source of infinite information; the vast majority of which happens to be wrong.
Old 09-20-2010, 04:14 PM   #10
FalseChristian
Diamond Member
 
FalseChristian's Avatar
 
Join Date: Jan 2002
Location: Oshawa, ON, CA
Posts: 3,305
Default

I don't know why everyone is going gaga over SSD. They're storage capacity is much to small to make them useful. When we start seeing 1TB SSD that don't cost an arm and a leg then they will become mainstream. Until then 1-2TB regular hard-drives are the only way to go at the moment.
__________________
Core i5 2500K @ 4.5GHz (45x100) 1.350v-Asus P8Z68-V/Gen3 (BIOS 3402)-16GB Kingston VR DDR3-1333Mhz @ 1600MHz 1.65v-2 EVGA GTX 760 2GB (1212/7600) ACX-750w Cooler Master GXII-RealTek on-board sound-Intel Onboard 1Gb Ethernet-Rogers Cable Internet 6.6MB/second-2TB Seagate 7200 SATA 6 HD-3TB SeaGate USB 3.0 EHD-22" Samsung SyncMaster 2253BW 1680x1050 67Htz-Windows 7 HP 64-bit SP1
Old 09-20-2010, 04:35 PM   #11
coolVariable
Diamond Member
 
coolVariable's Avatar
 
Join Date: May 2001
Posts: 3,722
Default

Quote:
Originally Posted by FalseChristian View Post
I don't know why everyone is going gaga over SSD. They're storage capacity is much to small to make them useful. When we start seeing 1TB SSD that don't cost an arm and a leg then they will become mainstream. Until then 1-2TB regular hard-drives are the only way to go at the moment.
So wrong.
Have you ever seen an SSD in action?
__________________
"You only get rich by the money you don't spend."
cool°
Variable

AnandTech! How about an article about the issues with AMD AHCI driver performance?
Old 09-20-2010, 04:46 PM   #12
jimhsu
Senior Member
 
Join Date: Mar 2009
Posts: 702
Default

Quote:
Originally Posted by FalseChristian View Post
I don't know why everyone is going gaga over hard drives. They're storage capacity is much to small to make them useful (compared to tape drives and floppies). When we start seeing 10MB HDDs that don't cost an arm and a leg then they will become mainstream. Until then several 1MB tapes or floppies are the only way to go at the moment.
Fixed.
Old 09-20-2010, 05:38 PM   #13
taltamir
Lifer
 
taltamir's Avatar
 
Join Date: Mar 2004
Posts: 13,549
Default

Quote:
Originally Posted by FalseChristian View Post
I don't know why everyone is going gaga over SSD. They're storage capacity is much to small to make them useful. When we start seeing 1TB SSD that don't cost an arm and a leg then they will become mainstream. Until then 1-2TB regular hard-drives are the only way to go at the moment.
For the love of science it is "THEIR"

Their = Something that belongs to them. Their capacity, their house, their anger, their spelling mistakes.
They're = They are. They're wrong, They're the all state champions, They're incapable of telling the difference between their and they're.

Now, as for your argument... You have obviously never used an SSD... My 80GB Intel G2 was the best upgrade I have ever gotten... The boot-time speedup was small and irrelevant, but Windows installs in a third of the time, Windows updates install in a fraction of the time, programs install in a fraction of the time, games install in a fraction of the time, games load levels in a tiny fraction of the time...
It has actually made games with painfully long loading screens fun again (e.g. Neverwinter Nights 2).

They don't need to be mass storage devices. My gaming machine has an 80GB SSD + 640GB spindle HDD, and I have a NAS (via GigE with jumbo frames) running raidz2 (RAID-6 ZFS) on 5x750GB spindle drives...

SSDs are already mainstream as a complementary system/games drive used alongside spindle drives for bulk storage... When you can get large SSDs cheaply, that is when spindle drives will finally be killed off and nobody will produce them anymore (they will have become completely obsolete).
Old 09-20-2010, 06:02 PM   #14
frostedflakes
Diamond Member
 
frostedflakes's Avatar
 
Join Date: Mar 2005
Posts: 7,906
Default

Quote:
Originally Posted by FalseChristian View Post
I don't know why everyone is going gaga over SSD. They're storage capacity is much to small to make them useful. When we start seeing 1TB SSD that don't cost an arm and a leg then they will become mainstream. Until then 1-2TB regular hard-drives are the only way to go at the moment.
Using an SSD for mass storage is just moronic. It only needs to be large enough to hold your OS, programs, and games. For most people, 60-120GB is probably enough unless you have a ton of games installed. I get by pretty well with a 60GB SSD, although 120/128GB would have been ideal if it had been within my budget. Large files (music, movies, backups, etc.) are stored on a 5400RPM drive, because they don't benefit from the higher IOPS of 7200RPM drives and SSDs. SSDs are good for speed and HDDs for space; smart people use both and have the best of both worlds.

Last edited by frostedflakes; 09-20-2010 at 06:05 PM.
Old 09-20-2010, 06:12 PM   #15
Eeqmcsq
Senior Member
 
Join Date: Jan 2009
Posts: 391
Default

It never hurts to push technology forward and have more speed. There's always going to be somebody who needs it, usually a heavy-use workstation or server. But for the average consumer, I do think any SSD nowadays is "quick" enough for daily usage. So average consumers who won't see a perceptible difference in their daily activities can stick to the "value" segment and look for the best price-per-capacity drives.

Personally, I think the next PC bottleneck is Internet speed. I'd like to see that pushed up so I can enjoy higher-quality live sports streams and don't have to guess what a player is trying to do on a low-quality, low-resolution stream.
Old 09-20-2010, 11:10 PM   #16
ksec
Member
 
Join Date: Mar 2010
Posts: 85
Default

Quote:
Games and programs will continue to increase in size, so continually increasing speed is beneficial.
Ah, something I hadn't considered before. Although most of today's programs are large because they include some very pretty graphics, multi-language interfaces and help files. Otherwise we're actually seeing the trend reverse, as people demand less bloated, fast and efficient programs (uTorrent, for example).

Internet speed is not a problem on my side of the world, where I can get 1000Mbps fairly cheap and local download speeds exceed my HDD's write speed (sigh).

So we're back to software, and maybe the CPU, as the possible limitation. I remember Intel saying antivirus software that was previously bound by HDD speed is now bound by CPU speed on an SSD.
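The link-versus-disk point is plain unit conversion; the drive figure below is a rough 2010-era assumption, not a benchmark:

```python
# Why a 1000 Mbit/s line can outrun an HDD: divide the line rate by 8 to get
# bytes per second, then compare against sustained sequential write speed.
# The HDD figure is a rough era-typical assumption.

LINK_MBIT_S = 1000                 # advertised line rate, megabits/second
link_mbyte_s = LINK_MBIT_S / 8     # best-case payload in megabytes/second
HDD_WRITE_MBYTE_S = 100            # assumed sustained HDD sequential write

print(f"link: {link_mbyte_s:.0f} MB/s vs HDD write: {HDD_WRITE_MBYTE_S} MB/s")
if link_mbyte_s > HDD_WRITE_MBYTE_S:
    print("the drive, not the connection, is the bottleneck")
```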

Last edited by ksec; 09-20-2010 at 11:12 PM.
Old 09-21-2010, 12:42 AM   #17
taltamir
Lifer
 
taltamir's Avatar
 
Join Date: Mar 2004
Posts: 13,549
Default

Quote:
Originally Posted by ksec View Post
Ah, something I hadn't considered before. Although most of today's programs are large because they include some very pretty graphics, multi-language interfaces and help files. Otherwise we're actually seeing the trend reverse, as people demand less bloated, fast and efficient programs (uTorrent, for example).
While it's true that people look for faster, less bloated software, they look for feature sets as well (typically with a higher priority).

There are also a variety of ways in which using extra storage can increase performance overall. Compression requires more CPU (and time) to decompress; if space is cheap enough, you can ship uncompressed graphics and audio for higher quality AND lower CPU/RAM usage.
Another example is http://en.wikipedia.org/wiki/Rainbow_table
There are plenty of other ways to trade space for speed.

I do love my uTorrent; it is highly optimized to be both faster and smaller, due in large part to the goals (and skill) of its creators.
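The space-for-speed idea behind that link can be sketched with a naive precomputed lookup. Real rainbow tables use hash chains to compress the table, but the trade is the same; the 4-digit-PIN domain is a toy assumption:

```python
# Trade storage for speed: hash every 4-digit PIN once up front, then invert
# any hash with a single dict lookup instead of a brute-force loop per query.
import hashlib

def sha256_hex(s: str) -> str:
    return hashlib.sha256(s.encode()).hexdigest()

# Spend ~10,000 table entries of space once...
table = {sha256_hex(f"{pin:04d}"): f"{pin:04d}" for pin in range(10_000)}

# ...then every reverse lookup is a constant-time dict hit instead of
# re-hashing all 10,000 candidates per query.
target = sha256_hex("4207")
print(table[target])
```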
Old 09-21-2010, 08:54 AM   #18
sub.mesa
Senior Member
 
Join Date: Feb 2010
Posts: 611
Default

SSDs are expensive if you compare them with HDDs. But it makes more sense to compare them with the price of your RAM; per gigabyte, SSDs are roughly six times cheaper.

The real question is when NAND (or another SSD technology) gets a native interface to the chipset/DRAM or directly to the CPU. The Serial ATA interface is not really suitable for the parallel architecture of NAND. Using NAND DIMMs, much like DRAM, might be possible in the future.

So a future picture of the memory in your system could be:

L1 cache: 100GB/s
L2 cache: 50GB/s
L3 cache: 40GB/s
RAM: 10GB/s
SSD: 1GB/s
HDD: 100MB/s

Note that I used throughput, while latency would have been more appropriate. An HDD can do 100MB/s on sequential transfers, but under 1MB/s under heavily random I/O.

An SSD has a very high capacity compared to its performance level. Imagine buying 60GB of RAM for just over 100 dollars.

Advanced systems like ZFS effectively use SSDs as memory when configured as an L2ARC ('cache') device. With a native interface, I believe their speeds could get much closer to DRAM's.
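Those ballpark tiers can be restated as the time to stream a fixed working set at each level, using the throughput figures from the post; the 4GB working-set size is an arbitrary choice, and latency is deliberately ignored, as noted above:

```python
# Time to stream a 4 GB working set at each tier's ballpark sequential
# throughput, using the figures from the post above (latency ignored).

tiers_bytes_per_s = {
    "L1 cache": 100e9,
    "L2 cache": 50e9,
    "L3 cache": 40e9,
    "RAM":      10e9,
    "SSD":      1e9,
    "HDD":      100e6,
}

WORKING_SET_BYTES = 4e9  # 4 GB, an arbitrary example size

for name, bps in tiers_bytes_per_s.items():
    print(f"{name:>8}: {WORKING_SET_BYTES / bps:8.2f} s")
```

The three-orders-of-magnitude gap between RAM and HDD is why the SSD tier in the middle changes the feel of a system so much.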
Old 09-21-2010, 10:37 AM   #19
ksec
Member
 
Join Date: Mar 2010
Posts: 85
Default

Quote:
RAM: 10GB/s
Single-channel DDR3-1333 already exceeds 10GB/s, not to mention 1600 and 1800.
And most systems nowadays are equipped with dual-channel memory.
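That figure checks out: DDR3-1333 performs 1333 million transfers per second over a 64-bit (8-byte) channel, so:

```python
# Peak DDR3 bandwidth = transfers/second x 8 bytes per transfer x channels.

def ddr3_bandwidth_gb_s(mega_transfers_per_s, channels=1):
    bytes_per_transfer = 8  # one 64-bit channel
    return mega_transfers_per_s * 1e6 * bytes_per_transfer * channels / 1e9

print(f"DDR3-1333, single channel: {ddr3_bandwidth_gb_s(1333):.1f} GB/s")
print(f"DDR3-1333, dual channel:   {ddr3_bandwidth_gb_s(1333, channels=2):.1f} GB/s")
```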

And why is a serial interface no good for a parallel architecture? PCIe is a serial interface as well. Most of today's interfaces are moving to serial designs, so why isn't that good for SSDs?
Old 09-21-2010, 11:03 AM   #20
sub.mesa
Senior Member
 
Join Date: Feb 2010
Posts: 611
Default

Quote:
Originally Posted by ksec View Post
Single-channel DDR3-1333 already exceeds 10GB/s, not to mention 1600 and 1800.
Well, I wasn't trying to get the numbers exactly right, just to indicate the scale of the various system components. Not many SSDs do 1GB/s either; only those with a PCIe interface.

The memory speeds of average systems are about 10GB/s, I think. Modern Core i7 CPUs start at 12 and end at 24 according to this review, so taking 10GB/s as an average for memory throughput is not that bad. You would get a totally different picture if you compared latencies, though.

Quote:
And why is a serial interface no good for a parallel architecture? PCIe is a serial interface as well. Most of today's interfaces are moving to serial designs, so why isn't that good for SSDs?
Because it adds a lot of complexity and was built entirely around serial devices like HDDs, which can only do one thing at a time on their physical medium. Serial ATA/300 did add NCQ (Native Command Queueing) support, which allows sending up to 32 outstanding I/O requests to an HDD or SSD, but NCQ was invented to reduce HDD seek times by rearranging requests into a quicker path, not with truly parallel storage devices like SSDs in mind.

Thus Serial ATA was great for low-performance storage but adds complexity, and therefore overhead, on high-performance devices like SSDs. This becomes more apparent when you issue many small requests.

Filesystems would also benefit greatly from handling NAND devices differently from mechanical HDDs. Filesystems often have optimizations that make sense on HDDs but much less so on SSDs. An intelligent filesystem could also write to fresh blocks to give the SSD an easier job so it barely degrades in performance, which is what copy-on-write filesystems do.

Perhaps we'll see motherboards with slots where you can insert NAND chips, with the NAND controller in the chipset. That is not going to happen anytime soon, though, and NAND will likely be replaced by another technology eventually. But that doesn't change the fact that solid-state storage is starting to replace mechanical storage, and we need to rewrite the parts of both software and hardware that were tuned specifically for mechanical storage and now hurt performance on solid-state devices.
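The seek-reordering idea behind NCQ can be sketched with a toy model: serving queued requests in track order cuts total head travel on a mechanical drive, while an SSD, having no head to move, gains nothing from the reordering. The track numbers below are arbitrary:

```python
# Elevator-style reordering, the trick NCQ enables for HDDs: compare total
# head travel for a request queue served in arrival order vs sorted by track.

def head_travel(tracks, start=0):
    """Total seek distance when requests are served in the given order."""
    travel, pos = 0, start
    for t in tracks:
        travel += abs(t - pos)
        pos = t
    return travel

queue = [700, 50, 640, 120, 680, 90]  # arrival order (arbitrary track numbers)
print("FIFO order:  ", head_travel(queue))
print("track order: ", head_travel(sorted(queue)))
```

Real NCQ firmware also accounts for rotational position, but the win comes from the same reordering freedom a deeper queue provides.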
Old 09-21-2010, 06:39 PM   #21
taltamir
Lifer
 
taltamir's Avatar
 
Join Date: Mar 2004
Posts: 13,549
Default

Quote:
Originally Posted by ksec View Post
Single-channel DDR3-1333 already exceeds 10GB/s, not to mention 1600 and 1800.
And most systems nowadays are equipped with dual-channel memory.

And why is a serial interface no good for a parallel architecture? PCIe is a serial interface as well. Most of today's interfaces are moving to serial designs, so why isn't that good for SSDs?
Eh, they seem to me to be constantly moving back and forth between parallel and serial... Every time an advancement is revolutionary enough that a single-lane serial connector can replace the previous parallel one, a parallel connector based on the same tech often follows later. But sometimes not.

Intel's Light Peak, for example, is parallel (4 separate laser wavelengths multiplexed), and I have read some work on making a parallel implementation of SATA, etc.
Old 09-21-2010, 07:48 PM   #22
Voo
Golden Member
 
Join Date: Feb 2009
Posts: 1,684
Default

Quote:
Originally Posted by taltamir View Post
Intel's Light Peak, for example, is parallel (4 separate laser wavelengths multiplexed), and I have read some work on making a parallel implementation of SATA, etc.
I was sure the 10Gbps numbers were achieved without WDM? But anyway, WDM is not quite the same thing as parallel channels as in PATA, for example.

Actually, the biggest problem with parallel channels is that they add lots of complexity at the analog level, which only gets worse at higher speeds.
Old 09-21-2010, 11:20 PM   #23
LokutusofBorg
Golden Member
 
LokutusofBorg's Avatar
 
Join Date: Mar 2001
Location: Rocky Mtns, USA
Posts: 1,055
Default

Quote:
Originally Posted by sub.mesa View Post
The real question is when NAND (or another SSD technology) gets a native interface to the chipset/DRAM or directly to the CPU. The Serial ATA interface is not really suitable for the parallel architecture of NAND. Using NAND DIMMs, much like DRAM, might be possible in the future.

So a future picture of the memory in your system could be:

L1 cache: 100GB/s
L2 cache: 50GB/s
L3 cache: 40GB/s
RAM: 10GB/s
SSD: 1GB/s
HDD: 100MB/s
FusionIO is already doing this with their SSD cards: their controller is a native PCIe-to-NAND controller, whereas the OCZ PCIe cards use traditional RAID and SATA controllers. I heard Micron has a full team (or multiple teams) of engineers working on a native PCIe solution. The next big push in SSD tech is going to be fun to watch.
Old 09-22-2010, 04:03 PM   #24
taltamir
Lifer
 
taltamir's Avatar
 
Join Date: Mar 2004
Posts: 13,549
Default

Quote:
Originally Posted by Voo View Post
Actually, the biggest problem with parallel channels is that they add lots of complexity at the analog level, which only gets worse at higher speeds.
Exactly. This is why parallel is only bothered with when it's absolutely necessary... Whenever you can get away with a serial design, you use it. Hence the back and forth: we get bandwidth-hungry so we go parallel, then some breakthrough increases speed a lot and we switch back to serial...

BTW, Intel is multiplexing 4 lasers for the 10Gb/s.
You're right that it's not exactly parallel, since it's only one "cable"... It's a bit harder to clearly say it's one or the other.
Old 09-22-2010, 04:56 PM   #25
Voo
Golden Member
 
Join Date: Feb 2009
Posts: 1,684
Default

Quote:
Originally Posted by taltamir View Post
Exactly. This is why parallel is only bothered with when it's absolutely necessary... Whenever you can get away with a serial design, you use it. Hence the back and forth: we get bandwidth-hungry so we go parallel, then some breakthrough increases speed a lot and we switch back to serial...

BTW, Intel is multiplexing 4 lasers for the 10Gb/s.
You're right that it's not exactly parallel, since it's only one "cable"... It's a bit harder to clearly say it's one or the other.
Actually, they reach the 10Gbit/s with one cable and one diode (and one wavelength), if I read their research site and the AT article correctly. I assume you got the idea of the 4 lasers from a picture on the research site where they state "Light Peak module with four fibers each capable of carrying 10Gb of data per second." (src)

And you're right insofar as parallel data transfer with analog signalling is more or less unreasonable; WDM should work just fine for optical data, and in a sense that is parallel data transfer.