Flash memory possesses a finite erase/program cycle capability.
Field deployments indicate that an SLC NAND block rated at 100K PEC can typically be erased and programmed 200,000 to 1,000,000 times (or even more) before end of life. The newest TLC/QLC NAND generations typically do not offer such a margin of safety.
NAND wear out manifests itself as the Flash controller's inability to erase or program a cell within the allocated time.
SSD life is measured in TBW (Terabytes Written). When vendors quote a TBW figure, ask them how they calculate it; there is room for interpretation in the Write Amplification Factor (WAF).
TBW is calculated from the SSD capacity, the NAND program/erase cycle rating (PEC), and the write amplification factor (WAF). WAF typically ranges from 2 to 12 and is largely determined by your workload and by how the SSD moves data from its buffer to NAND.
JEDEC does a good job trying to "standardize" workloads for these types of calculations. Look up JEDEC 219 Client (128k Sequential) and JEDEC 218 Enterprise (4k Random) to get a better understanding.
TBW = Capacity (TB) * (PEC / WAF)
Assume a 1TB drive with NAND rated at 3,000 PEC.
Sequential write example (recording from camera):
- Sequential writing, assume WAF = 2
TBW = 1 * (3000 / 2) = 1,500 TBW
Random write example (recording randomly from multiple sensors, or using enterprise applications):
If you have more of an enterprise workload with smaller random writes, WAF can be 12. This would drastically change TBW on the same SSD.
TBW = 1 * (3000 / 12) = 250 TBW
You can see how important it is to select the right capacity and NAND Flash for your workload and application.
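The same arithmetic is easy to script. The short Python sketch below simply restates the formula for the hypothetical 1TB, 3,000 PEC drive used above; the function name and parameters are illustrative, not part of any standard tool.

def tbw(capacity_tb, pec, waf):
    # Estimated Terabytes Written: capacity * (PEC / WAF)
    return capacity_tb * (pec / waf)

print(tbw(1, 3000, waf=2))    # sequential workload  -> 1500.0 TBW
print(tbw(1, 3000, waf=12))   # random / enterprise  -> 250.0 TBW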
Wear leveling uses blocks within the boundaries of one wear leveling zone. Some of those blocks may contain so-called "static" data, i.e. data that is rarely modified, such as OS or user files.
Dynamic wear leveling excludes the blocks holding "static" data from the wear leveling pool. Consider a hypothetical 4,000-block wear leveling zone where 3,500 blocks contain "static" data and the remaining 500 blocks form the wear leveling pool. Dynamic wear leveling would spread writes among those 500 blocks only. The drive could fail prematurely because wear leveling was unable to spread the usage across the blocks containing "static" data.
When "static" data is modified, however, wear leveling moves the entire block content to a new location and the freed block is returned to the wear leveling pool.
Dynamic wear leveling can be compared to a tire maintenance process that uses tire rotation and spare tires. The tires installed on a car are the equivalent of blocks in the wear leveling pool; the spare tires are the equivalent of blocks holding "static" data. Dynamic wear leveling acts like a tire rotation, evening out the wear of the tires installed on the car.
Writing to a block holding "static" data is like swapping an installed tire for the spare. This helps even out the wear between the spare and installed tires.
The bottom line for dynamic wear leveling is that if the drive content changes from time to time, all blocks will experience similar usage over the SSD's lifetime.
Some applications, however, such as those that use a file system, may push dynamic wear leveling to its limit. For example, the drive area storing the FAT and metadata may experience many more erases/writes than other areas of the wear leveling zone and/or the disk.
Static wear leveling helps address this challenge. It ensures that all blocks within the wear leveling zone, whether or not they contain "static" data, are subject to the same level of usage. Static wear leveling moves "static" data from one location to another, transparently to the host, based exclusively on block usage criteria.
While static wear leveling benefits MLC NAND based storage, virtually all industrial grade flash products today use dynamic wear leveling. Combined with SLC NAND, it provides very good SSD life expectancy for most high end applications.
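The difference between the two schemes can be illustrated with a toy simulation. The Python model below is purely illustrative: a scaled-down zone of 400 blocks, 350 of them holding "static" data, with every write going to the least-worn eligible block; the block counts, write count and selection policy are assumptions, not a description of any particular controller.

BLOCKS = 400      # blocks in the wear leveling zone (scaled down, hypothetical)
STATIC = 350      # blocks holding rarely modified "static" data
WRITES = 50_000   # total block writes applied to the zone

def max_wear(static_wear_leveling):
    # Return the highest erase count reached by any single block.
    erases = [0] * BLOCKS
    # Dynamic wear leveling only cycles the non-static blocks;
    # static wear leveling may relocate "static" data and use every block.
    candidates = range(BLOCKS) if static_wear_leveling else range(STATIC, BLOCKS)
    for _ in range(WRITES):
        target = min(candidates, key=erases.__getitem__)  # least-worn eligible block
        erases[target] += 1
    return max(erases)

print("dynamic only:", max_wear(False))  # 1000 erases concentrated on 50 blocks
print("with static :", max_wear(True))   # 125 erases spread across all 400 blocks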
Sequential writing across the entire drive makes wear leveling irrelevant: every memory section experiences the same level of usage. Sequential writing acts like perfect wear leveling, maximizing the calculated life expectancy. It should not be a surprise that SSD manufacturers typically calculate life expectancy, expressed in years of operation, based on this model.
Consider a 64GB drive written at a 25MB/s rate. It takes about 40 minutes to overwrite the entire drive; in other words, each block is written roughly every 40 minutes. Assuming a 100,000-cycle write endurance limit and 24/7/365 operation, the drive would reach end of life in about 8 years.
Conversely, SSD manufacturers could not claim a higher number of erase/write cycles than the 100,000 guaranteed by SLC NAND vendors, because such an application uses all blocks equally and wear leveling has nothing left to level.
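The 8-year figure can be reproduced with a few lines of arithmetic. A sketch only; the 100,000-cycle endurance and the drive parameters mirror the example above and are not vendor data.

SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_wear_out(capacity_gb, write_mb_s, endurance_cycles=100_000):
    # Years of continuous sequential writing before every block
    # reaches its rated erase/program limit.
    seconds_per_overwrite = capacity_gb * 1024 / write_mb_s
    return endurance_cycles * seconds_per_overwrite / SECONDS_PER_YEAR

print(years_to_wear_out(64, 25))   # ~8.3 years for a 64GB drive written at 25MB/s

The same helper, applied to the 100MB/s class write rates discussed below, yields roughly 2 years.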
For many years, Flash storage manufacturers tried to convince their customers that write endurance is a problem of the past. Effectively, wear leveling combined with EDC/ECC techniques made Flash based storage devices bulletproof for consumer and for most industrial and defense storage products.
These devices operated at relatively low transfer rates, and the majority of applications were read intensive. In addition, industrial and defense customers typically controlled the system design and worked with Flash SSD manufacturers to ensure that write endurance was not a limiting factor.
These applications did not push Flash write endurance to its limits. It is quite comforting to realize that an SLC NAND based Flash SSD, when overwritten once per day, would reach its write endurance limit in about 250 years.
Today, Flash SSDs have achieved sufficient capacity and performance to be deployed in mainstream notebook, server, data recorder and similar performance intensive applications. The transformation of Flash memory from a niche to a core storage technology is in the making.
These new applications will define the new limits for Flash SSDs.
Today's Flash SSDs support sustained write speeds of 100+ MB/s. At that rate, a 64GB drive would be overwritten 135 times per day and would reach its write endurance limit within about 2 years of such operation.
MS Windows uses various log files, and roughly 4kB of data is written to a log file every second. The majority of high end Flash SSDs use dynamic wear leveling. With no additional measures, write endurance limits could be reached within months of operation in such a system.
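A rough, heavily simplified estimate shows how quickly this can happen. The sketch below assumes a worst case in which every 4kB log update costs one erase of a 128kB block and the dynamic wear leveling pool serving that area holds only 100 blocks; the pool size, block size, endurance figure and no-coalescing assumption are all illustrative.

ENDURANCE = 100_000    # rated erase cycles per block (illustrative)
POOL_BLOCKS = 100      # free blocks available to dynamic wear leveling (assumed)
ERASES_PER_SECOND = 1  # one 4kB log update per second, worst case one block erase each

seconds = POOL_BLOCKS * ENDURANCE / ERASES_PER_SECOND
print(seconds / 86_400, "days")   # ~116 days, i.e. a few months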
Flash memory has seriously challenged HDD supremacy as primary computer storage in virtually all applications, and the technology has already proved itself in most consumer and enterprise deployments, where product lifespan is rather short. Memkor believes, however, that in military and industrial applications such as data recorders and fast continuous storage operating at industrial temperatures, the so far theoretical "no problem" claim cannot yet be substantiated.
The Flash SSD industry has to continue openly discussing and monitoring the Flash SSD write endurance performance before it disappears from the specifications.
It is also our obligation to ensure that factors influencing write endurance are known.
Flash Self-Monitoring, Analysis and Reporting Technology (SMART) can be used to monitor Flash SSD NAND usage. Monitoring the number of available spare blocks and the most stressed portion of memory provides an excellent view of NAND health. Depletion of the spare block pool may indicate that the NAND is approaching end of life and needs to be replaced.
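On a host with smartmontools installed, these counters can be polled with smartctl. The Python sketch below parses smartctl's JSON output; the attribute names shown vary by drive and vendor, so treat it as an illustrative example rather than a Memkor-specific tool.

import json
import subprocess

def read_smart_attributes(device):
    # Return SMART attributes reported by smartctl, keyed by attribute name.
    out = subprocess.run(["smartctl", "-A", "-j", device],
                         capture_output=True, text=True).stdout
    table = json.loads(out).get("ata_smart_attributes", {}).get("table", [])
    return {row["name"]: row["raw"]["value"] for row in table}

attrs = read_smart_attributes("/dev/sda")
# Spare block and wear counters go by different names on different drives.
for name in ("Available_Reservd_Space", "Wear_Leveling_Count", "Media_Wearout_Indicator"):
    if name in attrs:
        print(name, "=", attrs[name])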
SMART is available on all Memkor SATA and PATA products, including Industrial CompactFlash drives.
The 3.5" form factor PATA Flash SSDs are also equipped with LED indicators showing Flash health.