DATA Reading/Writing Increasing/Reducing?

Status
Not open for further replies.

chewy0914

Member
Oct 6, 2014
26
0
16
Below is something I was musing on.
I would like some feedback if anyone has any.

Data stored in certain patterns on the medium, to follow the add-in on read.

Shared data between sets:
0-0011, 1-1100, 2-0110, 0011

DVD/Blu-ray/other sends:
set1 = 00,00,00,11,11,11,11,11,00,00,00,00,11,11,00,00
set2 = 00,00,00,11,11,11,11,11,00,00,00,00,11,11,00,00
set3 = 00,00,00,11,11,11,11,11,00,00,00,00,11,11,00,00
set4 = 00,00,00,11,11,11,11,11,00,00,00,00,11,11,00,00

Set 1 has any 4 +/- (binary/other 00,11 +/-) predefined per frame/movie/portion of movie/other, added in/combined/other.

So frame one has data added at 00,00 (adding in 00,11), 11,11,11,11...

The data added in is held on the system, or the DVD/Blu-ray/HDD/other tells it what pattern to follow, reducing the data that has to be stored on DVD/Blu-ray/USB/flash/HDD/other.
Also used in writing to the HDD/other.

More:
Data is stored/set to be used in certain patterns, knowing data will be added into sets (8 +/- sets), being 2-8 +/- bits more per add-in (data stored in certain patterns on the medium/other).

Set 1 receives 00,11,11,11,00,11,11,
set 2 receives 10,00,11,00,11,00,00,00,
set 3 receives 11,00,11... and so on.

Sets stay the same after one run, then change after each pass through in a certain pattern of change(s).

E.g., set 10 (or again set 1) becomes 00,11,00,11,00,11,00 instead of the original 00,11,11,11,00,11,11 on pass 2.
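A rough sketch of what I mean, in code (the pattern table and the stored indices here are made up just to illustrate): the system holds the predefined patterns, and the medium only stores short references that get expanded back into the full stream on read.

```python
# Hypothetical dictionary scheme: the reading system holds predefined
# bit patterns; the medium stores only short indices into that table.
patterns = {0: "0011", 1: "1100", 2: "0110"}  # from the "shared data" line above

def expand(indices):
    """Expand stored indices back into the full bit stream on read."""
    return "".join(patterns[i] for i in indices)

stored = [0, 1, 2, 0]        # what the disc would actually hold
print(expand(stored))        # 0011110001100011 - the reconstructed stream
```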
 
Last edited:

serpretetsky

Senior member
Jan 7, 2012
642
26
101
I'm having a lot of trouble trying to understand your post. What do you want achieved?
 

inachu

Platinum Member
Aug 22, 2014
2,387
2
41
The data is often compressed, so this wouldn't happen at the laser-read level but at decompression, I believe, but I could be wrong.
 

inachu

Platinum Member
Aug 22, 2014
2,387
2
41
Furthermore, you can look at how the data is written if you get a cheap microscope and examine a CD that has been burned with some data.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Are you trying to reinvent run-length encoding and xoring?
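For comparison, here is what those two textbook techniques look like (a minimal sketch of the standard versions, not the scheme described above): run-length encoding stores each run of repeated symbols as (symbol, count), and XOR-ing successive passes stores only the positions that changed between them.

```python
from itertools import groupby

def rle(bits):
    """Run-length encode: each run of repeats becomes (symbol, run length)."""
    return [(sym, len(list(run))) for sym, run in groupby(bits)]

def xor_delta(a, b):
    """XOR two equal-length bit strings; a 1 marks a changed position."""
    return "".join("1" if x != y else "0" for x, y in zip(a, b))

print(rle("000111110000"))        # [('0', 3), ('1', 5), ('0', 4)]
print(xor_delta("0011", "0110"))  # 0101
```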
 

inachu

Platinum Member
Aug 22, 2014
2,387
2
41
Below is something I was musing on.
I would like some feedback if anyone has any.

Data stored in certain patterns on the medium, to follow the add-in on read.

Shared data between sets:
0-0011, 1-1100, 2-0110, 0011

DVD/Blu-ray/other sends:
set1 = 00,00,00,11,11,11,11,11,00,00,00,00,11,11,00,00
set2 = 00,00,00,11,11,11,11,11,00,00,00,00,11,11,00,00
set3 = 00,00,00,11,11,11,11,11,00,00,00,00,11,11,00,00
set4 = 00,00,00,11,11,11,11,11,00,00,00,00,11,11,00,00

Set 1 has any 4 +/- (binary/other 00,11 +/-) predefined per frame/movie/portion of movie/other, added in/combined/other.

So frame one has data added at 00,00 (adding in 00,11), 11,11,11,11...

The data added in is held on the system, or the DVD/Blu-ray/HDD/other tells it what pattern to follow, reducing the data that has to be stored on DVD/Blu-ray/USB/flash/HDD/other.
Also used in writing to the HDD/other.

More:
Data is stored/set to be used in certain patterns, knowing data will be added into sets (8 +/- sets), being 2-8 +/- bits more per add-in (data stored in certain patterns on the medium/other).

Set 1 receives 00,11,11,11,00,11,11,
set 2 receives 10,00,11,00,11,00,00,00,
set 3 receives 11,00,11... and so on.

Sets stay the same after one run, then change after each pass through in a certain pattern of change(s).

E.g., set 10 (or again set 1) becomes 00,11,00,11,00,11,00 instead of the original 00,11,11,11,00,11,11 on pass 2.


Hard drives only use half of their capacity for user data.
When you buy a 500 GB hard drive, in reality it is a 1 TB drive.
The reason you do not see the other half is that it is used as slack space for faster writes and as tiny cache areas to help overall speed.

But yes, data placement can have an impact on read and write speeds, and back-room tech tests have proven without a doubt that the FAT32 disk structure is really faster than NTFS.

There is an article on the above already, so do not take my word for it; go find it via Google. Also, usage differs between people: how one person uses a PC is vastly different from how someone else does. So you need to structure data for the workload at hand, not just use a general, averaged data-placement structure, because over time it will become apparent that a given disk structure will not work under a given scenario.

So you have to know the environment in which the drive will operate; then you will know how to compute the proper algorithm for data placement.

Now, because you mention DVDs and CDs, the data there is static and will never change, so you truly can game the data placement to be as quick as possible.

You have to be aware that not all DVD drives can read all DVD discs, and the same goes for CDs. Do not expect a drive made today to read a CD burned in 1992; I have had several that could not be read by any newer drive. But if this is not your argument, then let's move on:

OK, so do you have a theoretical file system, or are you strictly talking about data placement? If data is placed on a DVD the way it is on a hard drive, then yes, it can be slow.
I have read about people testing analog drives that load and hold more data than a digitally burned disc.

I have also read about holographic drive systems that place data into a hologram, where each holographic bit can hold other bits at various levels in the data store, but that company went bankrupt because the system was not ready for consumers and the R&D costs were through the roof.
 
Last edited:

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Hard drives only use half of their capacity for user data.
When you buy a 500 GB hard drive, in reality it is a 1 TB drive.
The reason you do not see the other half is that it is used as slack space for faster writes and as tiny cache areas to help overall speed.

Uhm, no. There is quite a bit of "non-user-accessible" raw capacity on HDDs, but it is used for things like servo positioning data, the ECC area for user data sectors, and spare sectors.

They are not used for "slack space" and "tiny cache areas".

Edit: Methinks you are confusing SSD technology with HDDs.

SSDs are different: they use unused data sectors as "slack space" (areas for fresh writes to land, also known as "overprovisioning"), and some drives, like the MX200, treat a small portion of NAND as an SLC write cache, which both speeds up writes and reduces write amplification. (Some drives with an SLC write cache are even able to achieve WA < 1.0, without compression like SandForce uses.)
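To put a number on write amplification: it is just physical NAND bytes written divided by host bytes written, so anything that shrinks NAND writes relative to host writes (compression, write caching) can push it below 1.0. A toy calculation with made-up figures:

```python
def write_amplification(nand_bytes_written, host_bytes_written):
    """WA = physical NAND writes / logical host writes."""
    return nand_bytes_written / host_bytes_written

# Typical drive: garbage collection rewrites data, so NAND writes
# exceed what the host actually asked to write.
print(write_amplification(150, 100))  # 1.5

# SandForce-style compression: 100 host bytes shrink to 70 on NAND.
print(write_amplification(70, 100))   # 0.7 -> WA < 1.0
```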
 
Last edited:

inachu

Platinum Member
Aug 22, 2014
2,387
2
41
Uhm, no. There is quite a bit of "non-user-accessible" raw capacity on HDDs, but it is used for things like servo positioning data, the ECC area for user data sectors, and spare sectors.

They are not used for "slack space" and "tiny cache areas".

Edit: Methinks you are confusing SSD technology with HDDs.

SSDs are different: they use unused data sectors as "slack space" (areas for fresh writes to land, also known as "overprovisioning"), and some drives, like the MX200, treat a small portion of NAND as an SLC write cache, which both speeds up writes and reduces write amplification. (Some drives with an SLC write cache are even able to achieve WA < 1.0, without compression like SandForce uses.)


You are right about ECC, but part of that area is also used as slack space: the drive waits until the most urgent data is done first, and then, when it has time, writes the deferred data back properly. That is what is meant by slack space.

I only found out about how drives use that space from a news article about a man who was using Norton Ghost and discovered he could re-image his drive past the limit and double his capacity for free. He then found out this killed the drive's lifespan; it was dead within a couple of hours. Norton patched that so people could not do it anymore. He could not return the drive, because doing that had voided the warranty.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
That would be due to a firmware bug in an HDD that only used half a platter (be it inner/outer or top/bottom), not something common to all HDDs. Seeking to a track takes a very long time: most of the 10-20 ms it usually takes is spent settling on exactly the right spot, not getting near it, which is not a trivial thing to do, as small as tracks are, with all the vibration the arm has to put up with. So doing anything like that would slow the drive way down, not speed it up. Anything used on the other side of the platter by the firmware will be accessible to vendor tools, and to proprietary tools from companies that pay and sign NDAs to get at the real innards of the drive.
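For a sense of scale on those numbers: even the rotational latency alone (waiting, on average, half a revolution for the target sector to come back under the head) is measured in milliseconds, which is why any extra head movement is so costly.

```python
def avg_rotational_latency_ms(rpm):
    """Average rotational latency = time for half a revolution, in ms."""
    return 0.5 * 60_000 / rpm  # 60,000 ms per minute of rotation

print(avg_rotational_latency_ms(7200))  # ~4.17 ms for a 7200 RPM drive
print(avg_rotational_latency_ms(5400))  # ~5.56 ms for a 5400 RPM drive
```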

That tech should have been able to get either the manufacturer or Symantec to replace the HDD, since what he managed to do should not have been possible; that is a hardware or software maker's fault, not a user fault.
 

inachu

Platinum Member
Aug 22, 2014
2,387
2
41
Now if your data looked like this:

DVD/Blu-ray/other sends
00,00,00,11,11,11,11,11,00,00,00,00,11,11,00,00
-----00,00,00,11,11,11,11,11,00,00,00,00,11,11,00,00
----------00,00,00,11,11,11,11,11,00,00,00,00,11,11,00,00
--------------------00,00,00,11,11,11,11,11,00,00,00,00,11,11,00,00

Using the step method can sometimes work, since it helps with the lag of the read/write head moving.

But data placement can only go so far, and that is why today's modern drives have a predictive data-load program on the controller that tries to predict, based on past data loads, what you will need. Many times that data is stored in the larger cache built into the drive. I was at Micro Center and saw a green WD with a 16 MB cache.

Having a larger cache does not guarantee faster data-load times, nor does it make the drive itself faster. It just makes shorter data-load times more probable on average.
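The "step method" above resembles classic sector interleaving, where consecutive logical sectors are spaced several physical slots apart around the track so the controller has time to process one sector before the next passes under the head. A sketch with invented numbers:

```python
def interleave_layout(sectors, factor):
    """Place logical sectors around a track with the given interleave factor."""
    track = [None] * sectors
    pos = 0
    for logical in range(sectors):
        while track[pos] is not None:    # skip slots already assigned
            pos = (pos + 1) % sectors
        track[pos] = logical
        pos = (pos + factor) % sectors   # step ahead by the interleave factor
    return track

# 3:1 interleave on an 8-sector track: consecutive logical sectors land
# 3 physical slots apart, giving the controller time between reads.
print(interleave_layout(8, 3))  # [0, 3, 6, 1, 4, 7, 2, 5]
```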

brown noser larry caught my typo. #sigh
 
Last edited:

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
But data placement can only go so far, and that is why today's modern drives have a predictive data-load program on the controller that tries to predict, based on past data loads, what you will need. Many times that data is stored in the larger cache built into the drive. I was at Micro Center and saw a green WD with a 16 gig cache.

Having a larger cache does not guarantee faster data-load times, nor does it make the drive itself faster. It just makes shorter data-load times more probable on average.

Sigh. 16 "gig"? I think you mean a 16 MB cache. 16 GB would most likely be more than the RAM in the PC it's going into. Even Seagate's desktop hybrid drives have only 8 GB of NAND flash cache memory.
 