Non-RAID drive architectures
{{Short description|Techniques for using multiple disk drives that do not use RAID}}

The most widespread standard for configuring multiple [[hard disk drive]]s is [[RAID]] (Redundant Array of Inexpensive/Independent Disks), which comes in a number of [[Standard RAID levels|standard configurations]] and [[Non-standard RAID levels|non-standard configurations]]. '''Non-RAID drive architectures''' also exist, and are referred to by acronyms with [[tongue-in-cheek]] similarity to RAID:
* '''JBOD''' (derived from "'''just a bunch of disks'''"): describes multiple hard disk drives operated as individual, independent hard disk drives.
* '''SPAN''' or '''BIG''': a method of combining the free space on multiple hard disk drives from a "JBOD" to create a spanned volume. Such a concatenation is sometimes also called BIG/SPAN. A SPAN or BIG is generally a spanned volume only, as it often contains mismatched types and sizes of hard disk drives.<ref name="spanned_volumes_def">{{Cite web |title=Explanation of Spanned Volumes |url=http://www.diskinternals.com/glossary/spanned_volumes.html}}</ref>
* '''MAID''' (derived from "'''massive array of idle drives'''"): an architecture using hundreds to thousands of hard disk drives for providing [[nearline storage]] of [[data]], primarily designed for "Write Once, Read Occasionally" (WORO) applications, in which increased storage density and decreased cost are traded for increased latency and decreased redundancy.

{{Anchor|LINEAR}}

== JBOD ==

'''JBOD''' (abbreviated from "'''Just a Bunch Of Disks'''"/"'''Just a Bunch Of Drives'''") is an architecture using multiple hard drives exposed as individual devices. Hard drives may be treated independently or may be combined into one or more logical volumes using a volume manager like [[Logical Volume Manager (Linux)|LVM]] or [[mdadm]], or a device-spanning filesystem like [[btrfs]]; such volumes are usually called "spanned" or "linear" (also SPAN or BIG).<ref>{{Cite web |title=JBOD (just a bunch of disks or just a bunch of drives) |url=http://searchstorage.techtarget.com/definition/JBOD |first=Margaret |last=Rouse |publisher=TechTarget |work=SearchStorage.TechTarget.com |date=September 2005 |access-date=2013-10-31}}</ref><ref>[https://technet.microsoft.com/nl-nl/library/cc771087.aspx Manage spanned volumes]</ref><ref>[https://technet.microsoft.com/en-us/library/cc779579(v=ws.10).aspx Using spanned volumes]</ref> A spanned volume provides no redundancy, so failure of a single hard drive amounts to failure of the whole logical volume.<ref>{{cite web |url=http://tldp.org/HOWTO/LVM-HOWTO/mapmode.html |title=LVM HOWTO, Section 3.7. mapping modes (linear/striped) |website=TLDP.org |access-date=2013-12-31}}</ref><ref>{{cite web |url=https://raid.wiki.kernel.org/index.php/RAID_setup#Linear_mode |title=Linux RAID setup |date=2013-10-05 |website=[[Kernel.org]] |publisher=Linux Kernel Organization |access-date=2013-12-31}}</ref> Redundancy for resilience and/or bandwidth improvement may be provided in software at a higher level.
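
The distinction between independent drives and a spanned volume can be illustrated with a minimal Python sketch (purely illustrative and not tied to any real volume manager; the drive names and sizes are hypothetical):

<syntaxhighlight lang="python">
# Minimal sketch, not tied to any real volume manager: independent JBOD
# drives remain individually usable after a failure, while a spanned
# (linear) volume is lost if any member fails.
from dataclasses import dataclass

@dataclass
class Drive:
    name: str        # hypothetical device name
    size_gb: int
    healthy: bool = True

def independent_usable(drives):
    """Independent drives: each surviving drive keeps its own data."""
    return sum(d.size_gb for d in drives if d.healthy)

def spanned_usable(drives):
    """Spanned volume: one failed member takes down the whole volume."""
    return sum(d.size_gb for d in drives) if all(d.healthy for d in drives) else 0

drives = [Drive("disk0", 500), Drive("disk1", 500), Drive("disk2", 1000)]
print(independent_usable(drives), spanned_usable(drives))  # 2000 2000
drives[1].healthy = False
print(independent_usable(drives), spanned_usable(drives))  # 1500 0
</syntaxhighlight>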

== Concatenation (SPAN, BIG) ==

{{Main article|Spanned volume}}
{{Disputed section|date=December 2012}}

[[File:JBOD.svg|thumb|200px|Diagram of a SPAN/BIG ("JBOD") setup.]]

'''Concatenation''' or '''spanning''' of drives is not one of the numbered RAID levels, but it is a popular method for combining multiple physical disk drives into a single logical disk. It provides no data redundancy. Drives are merely [[Concatenation|concatenated]] together, end to beginning, so they appear to be a single large disk. It may be referred to as '''SPAN''' or '''BIG''' (meaning just the words "span" or "big", not as acronyms).{{Citation needed|date=December 2013}}

In the adjacent diagram, data are concatenated from the end of disk 0 (block A63) to the beginning of disk 1 (block A64), and from the end of disk 1 (block A91) to the beginning of disk 2 (block A92). If RAID 0 were used, then disk 0 and disk 2 would be truncated to 28 blocks, the size of the smallest disk in the array (disk 1), for a total size of 84 blocks.{{Citation needed|date=December 2013}}
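
The mapping described above can be sketched in a few lines of Python (illustrative only; the diagram does not state the exact size of disk 2, so 36 blocks is assumed here):

<syntaxhighlight lang="python">
# Illustrative sketch of the layout described above; not real driver code.
# Member sizes in blocks: disk 0 = 64 (A0–A63), disk 1 = 28 (A64–A91),
# disk 2 = 36 (assumed, since its exact size is not stated).
disks = [64, 28, 36]

def locate_concat(lba, sizes):
    """Map a logical block of a concatenated (SPAN/BIG) volume to
    (disk index, offset within that disk)."""
    for i, size in enumerate(sizes):
        if lba < size:
            return i, lba
        lba -= size
    raise ValueError("address beyond end of volume")

print(locate_concat(63, disks))  # (0, 63): block A63, last block of disk 0
print(locate_concat(64, disks))  # (1, 0):  block A64, first block of disk 1
print(locate_concat(92, disks))  # (2, 0):  block A92, first block of disk 2

# Capacity comparison from the text: concatenation uses every block,
# whereas RAID 0 truncates all members to the smallest disk.
print(sum(disks))               # 128 blocks as a spanned volume
print(min(disks) * len(disks))  # 84 blocks as RAID 0
</syntaxhighlight>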

What makes a SPAN or BIG different from RAID configurations is its flexibility in drive selection. While RAID usually requires all drives to be of similar capacity{{Efn|Otherwise, in most cases only the drive portions equal to the size of the smallest RAID set member would be used.}} and it is preferred that the same or similar drive models are used for performance reasons, a spanned volume has no such requirements.<ref name="spanned_volumes_def" /><ref>{{Cite web |title=RAID Requirements |url=http://www.pcguide.com/ref/hdd/perf/raid/conf/driveSelection-c.html}}</ref>

=== Implementations ===

The initial release of Microsoft's [[Windows Home Server]] employs [[Windows Home Server#Drive Extender|drive extender]] technology, whereby an array of independent drives is combined by the OS to form a single pool of available storage. This storage is presented to the user as a single set of network shares. Drive extender technology expands on the normal features of concatenation by providing data redundancy through software – a shared folder can be marked for duplication, which signals to the OS that a copy of the data should be kept on multiple physical drives, whilst the user will only ever see a single instance of their data.<ref>{{cite web |url=http://www.microsoft.com/downloads/details.aspx?FamilyID=40C6C9CC-B85F-45FE-8C5C-F103C894A5E2&displaylang=en |title=Windows Home Server Drive Extender Technical Brief |website=Microsoft.com |access-date=2009-03-12}}</ref> This feature was removed from Windows Home Server in its subsequent major release.<ref>{{cite web |url=http://windowsteamblog.com/windows/b/windowshomeserver/archive/2010/11/23/windows-home-server-code-name-vail-update.aspx |title=Windows Home Server code name "Vail" – Update}}</ref>

The [[btrfs]] filesystem can span multiple devices of different sizes, including RAID 0/1/10 configurations, storing 1 to 4 redundant copies of both data and metadata.<ref name=":0">{{Cite web|title=Using Btrfs with Multiple Devices - btrfs Wiki|url=https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices|access-date=2021-01-19|website=btrfs.wiki.kernel.org}}</ref> (A flawed RAID 5/6 implementation also exists, but it can result in data loss.)<ref name=":0" /> For RAID 1, the devices must have complementary sizes. For example, a filesystem spanning two 500 GB devices and one 1 TB device could provide RAID 1 for all data, while a filesystem spanning a 1 TB device and a single 500 GB device could only provide RAID 1 for 500 GB of data.
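
The capacity arithmetic in these examples can be reproduced with a simplified model that places each mirrored chunk on the two devices currently having the most unallocated space (an illustrative sketch with an assumed 1 GB chunk size, not the actual btrfs allocator):

<syntaxhighlight lang="python">
# Simplified model of two-copy (RAID 1) chunk allocation across devices of
# mixed sizes; illustrative only, not the actual btrfs allocator.
def raid1_usable(device_sizes_gb, chunk_gb=1):
    """Greedily place each mirrored chunk on the two devices with the most
    free space; return how much data can be stored with two copies."""
    free = list(device_sizes_gb)
    usable = 0
    while True:
        free.sort(reverse=True)
        if len(free) < 2 or free[1] < chunk_gb:
            return usable
        free[0] -= chunk_gb
        free[1] -= chunk_gb
        usable += chunk_gb

print(raid1_usable([500, 500, 1000]))  # 1000 GB usable: all data can be mirrored
print(raid1_usable([1000, 500]))       # 500 GB usable: only 500 GB can be mirrored
</syntaxhighlight>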

The [[ZFS]] filesystem can likewise pool multiple devices of different sizes and implement RAID, though it is less flexible, requiring the creation of virtual devices of fixed size on each device before pooling.<ref>{{Cite web|title=Five Years of Btrfs {{!}} MarkMcB|url=https://markmcb.com/2020/01/07/five-years-of-btrfs/|access-date=2021-01-19|website=markmcb.com}}</ref>

In enterprise environments, enclosures are used to expand a server's data storage by using JBOD<ref>{{Cite web|title=JBOD Data Storage Enclosures|url=https://serverpartdeals.com/collections/jbod-data-storage-enclosures|access-date=2021-04-14|website=ServerPartDeals.com|language=en}}</ref> devices. This is often a convenient way to scale up storage when needed, by daisy-chaining additional disk shelves.<ref>{{Cite web|title=Western Digital Ultrastar Data60 Hybrid Storage Platform SE4U60 Gen3 {{!}} 4U60 60-Bay Data Center JBOD Enclosure {{!}} Up to 1PB|url=https://serverpartdeals.com/products/western-digital-ultrastar-data60-hybrid-storage-platform-gen-3-4u60-bay-jbod-for-data-center-storage-up-to-1080tb-per-storage-enclosure|access-date=2021-04-14|website=ServerPartDeals.com|language=en}}</ref>

Greyhole, a disk-pooling application, implements what it calls a "storage pool". This pool is created by presenting to the user, through [[Samba (software)|Samba]] shares, a logical drive that is as large as the sum of all physical drives that are part of the pool. Greyhole also provides data redundancy through software – the user can configure, per share, the number of file copies that Greyhole is to maintain. Greyhole will then ensure that for each file in such shares, the correct number of extra copies are created and maintained on multiple physical drives. The user will only ever see one copy of each file.<ref>{{cite web |url=https://github.com/gboudreau/Greyhole |title=gboudreau/Greyhole |publisher=Guillaume Boudreau}}</ref>

== MAID ==

'''MAID''' (abbreviated from "'''massive array of idle drives'''") is an architecture using hundreds to thousands of hard drives for providing [[nearline storage]] of data. MAID is designed for "Write Once, Read Occasionally" (WORO) applications.<ref>{{cite web |url=http://searchstorage.techtarget.com/definition/MAID |title=MAID (massive array of idle disks) |date=January 2009 |website=TechTarget.com |access-date=2013-12-31}}</ref><ref name="supercomputing-maid">{{cite web |url=http://www.supercomputing.org/sc2002/paperpdfs/pap.pap312.pdf |title=Massive Arrays of Idle Disks For Storage Archives |date=2002-07-26 |author1=Dennis Colarelli |author2=Dirk Grunwald |publisher=University of Colorado |access-date=2013-12-31}}</ref><ref>{{cite web |url=http://acronyms.thefreedictionary.com/Write+Once,+Read+Occasionally |title=What does WORO stand for? |website=TheFreeDictionary.com |access-date=2013-12-31}}</ref>

Compared to RAID technology, MAID has increased storage density, and decreased cost, electrical power, and cooling requirements. However, these advantages come at the cost of much increased latency, significantly lower [[throughput]], and decreased redundancy. Drives designed for multiple spin-up/down cycles (e.g. [[laptop]] drives) are significantly more expensive.<ref>{{cite web |url=http://www.sgi.com/pdfs/4213.pdf |title=Enterprise MAID Quick Reference Guide |author=SGI |date=2012 |access-date=12 July 2014 |archive-url=https://web.archive.org/web/20140714192709/http://www.sgi.com/pdfs/4213.pdf |archive-date=2014-07-14}}</ref> Latency may be as high as tens of seconds.<ref name="rick_cook">Cook, Rick (2004-07-12). [http://searchstorage.techtarget.com/tip/1,289483,sid5_gci992380,00.html "Backup budgets have it MAID with cheap disk"]. Retrieved on 2008-07-15.</ref> MAID can supplement or replace [[Tape library|tape libraries]] in [[hierarchical storage management]].<ref name="supercomputing-maid" />

To allow a more gradual tradeoff between access time and power savings, some MAIDs, such as Nexsan's AutoMAID, incorporate drives capable of spinning down to a lower speed.<ref>{{cite web |url=http://www.nexsan.com/library/automaid.aspx |title=AutoMAID Energy Saving Technology |date=2011 |access-date=7 April 2011 |archive-url=https://web.archive.org/web/20140919172344/http://www.nexsan.com/library/automaid.aspx |archive-date=2014-09-19}}</ref> Large-scale disk storage systems based on MAID architectures allow dense packaging of drives and are designed to have only 25% of disks spinning at any one time.<ref name="rick_cook" />
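
Such a policy can be sketched with a toy model (illustrative only, not any vendor's algorithm; the drive count, spinning cap and spin-up delay below are assumed values):

<syntaxhighlight lang="python">
# Toy model of a MAID power policy; not any vendor's actual algorithm.
# Assumed values: 100 drives, at most 25% spinning, 15 s spin-up delay.
import random

NUM_DRIVES = 100
MAX_SPINNING = 25
SPIN_UP_SECONDS = 15.0

spinning = set()   # drives currently spun up
lru = []           # spinning drives, least recently used first

def access(drive):
    """Return the extra latency (seconds) incurred by accessing a drive."""
    penalty = 0.0
    if drive not in spinning:
        if len(spinning) >= MAX_SPINNING:
            victim = lru.pop(0)        # spin down the least recently used drive
            spinning.discard(victim)
        spinning.add(drive)            # spin the requested drive up
        penalty = SPIN_UP_SECONDS
    if drive in lru:
        lru.remove(drive)
    lru.append(drive)                  # mark as most recently used
    return penalty

random.seed(0)
delays = [access(random.randrange(NUM_DRIVES)) for _ in range(1000)]
print(f"spinning now: {len(spinning)}; mean extra latency: {sum(delays)/len(delays):.1f} s")
</syntaxhighlight>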

== See also ==

* [[File-based replication]]
* [[Nested RAID levels]]
* [[Non-standard RAID levels]]
* [[Standard RAID levels]]
* [[Drobo]]

== Explanatory notes ==

{{Notelist}}

== References ==
{{Reflist}}