US8341340B2 - Multi-tier address mapping in flash memory - Google Patents
- Publication number
- US8341340B2 (Application No. US12/840,938)
- Authority
- US
- United States
- Prior art keywords
- mapping
- memory
- tier
- last written
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active, expires
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/72—Details relating to flash memory management
- G06F2212/7201—Logical to physical mapping or translation of blocks or pages
Definitions
- Various embodiments of the present invention are generally directed to a method and system for mapping addresses in a solid state non-volatile memory device.
- methods, systems, and/or apparatuses perform grouping of a user data portion of a flash memory arrangement into a plurality of mapping units.
- Each of the mapping units includes a user data memory portion and a metadata portion.
- the mapping units are formed into a plurality of groups that are associated with at least one lower tier of a forward memory map.
- For each of the groups a last written mapping unit within the group is determined.
- the last written mapping unit includes mapping data in the metadata portion that facilitates determining a physical address of other mapping units within the group.
- a top tier of the forward memory map is formed that includes at least physical memory locations of the last written mapping units of each of the groups.
- a physical address of a targeted memory is determined using the top tier and the metadata of the at least one lower tier.
- the physical addresses of the mapping units may be arbitrarily assigned to corresponding logical addresses used by a host to access the user data memory portions of the mapping units.
- determining a physical address of the targeted memory using the top tier and the metadata of the at least one lower tier may involve selecting a first last written memory unit from a selected one of the groups based on a logical address of the targeted memory, and determining the physical address of the targeted memory based on the metadata portion of the first last written memory unit.
- the mapping data of the last written mapping units do not include the physical addresses of the respective last written mapping units.
- the at least one lower tier may include a second tier and a third tier, and in such a case the groups are associated with the second tier. Also in such a case, each of the groups may include a plurality of subgroups, and wherein the subgroups are associated with the third tier.
- determining the physical address of the targeted memory using the top tier and the metadata of the at least one lower tier may involve: a) selecting a first last written memory unit from a selected one of the groups based on a logical address of the targeted memory; b) determining a second last written memory unit of a selected one of the subgroups based on the metadata portion of the first last written memory unit; and c) determining the physical address of the targeted memory based on the metadata portion of the second last written memory unit.
- methods, systems, and/or apparatuses perform receiving, at a flash memory device, an access request for user data based on a logical memory address. From a top tier of a forward map based on the logical memory address, a physical address of a last written mapping unit of a lower tier group of the forward map is determined.
- the lower tier group includes a plurality of mapping units, including the last written mapping unit, and the mapping units each include a user data memory portion and a metadata portion. From lower tier mapping data within the metadata portion of the last written mapping unit, a second physical address of a mapping unit of the lower tier group is determined, and access to the user data is facilitated based on the second physical address.
- the physical addresses of the mapping units may be arbitrarily assigned to the corresponding logical addresses.
- the access request may include a write request.
- a new mapping unit of the lower tier group is selected for receiving user data of the write request, the lower tier mapping data is updated based on a physical address of the new mapping unit, the user data and the mapping data are written to the respective user data memory portion and metadata portion of the new mapping unit, and the top tier is updated with a physical address of the new mapping unit.
- the determining of the second physical address from the lower tier mapping data of the last written mapping unit occurs only if the last written mapping unit does not correspond to the logical memory address; if the last written mapping unit does correspond to the logical memory address, the physical address of the last written mapping unit is provided as the second physical address to facilitate access to the user data.
- the lower tier mapping data includes second tier mapping data.
- facilitating access to the user data based on the second physical address involves a) determining, from the second tier mapping data, a physical address of a second last written mapping unit of a third tier group of the forward map, wherein the third tier group comprises a subset of the lower tier group; b) determining, from third tier mapping data within the metadata portion of the second last written mapping unit, a third physical address of a mapping unit of the third tier group; and c) facilitating access to the user data based on the third physical address.
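The three-step lookup (a–c) above can be sketched in a few lines. All names and data structures below are illustrative assumptions for this sketch, not the patent's implementation:

```python
# Physical flash modeled as a dict: physical address -> (user_data, metadata).
# Metadata of a last-written unit maps a lower-tier key to a physical address.
flash = {
    100: ("data-A", {"subgrp_x": 200}),  # last written unit of a 2nd-tier group
    200: ("data-B", {"unit_y": 300}),    # last written unit of a 3rd-tier subgroup
    300: ("data-C", {}),                 # the targeted mapping unit
}

# Top tier: each group points to the address of its last-written unit.
top_tier = {"grp_0": 100}

def three_tier_lookup(group, subgroup, unit):
    addr_a = top_tier[group]             # a) top tier -> last-written unit of group
    addr_b = flash[addr_a][1][subgroup]  # b) its metadata -> last-written unit of subgroup
    return flash[addr_b][1][unit]        # c) its metadata -> target physical address
```

Each step past the top-tier lookup costs one flash read, so this scheme trades RAM-resident map size for at most two extra reads.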
- FIG. 1 is a block diagram of a storage apparatus according to an example embodiment of the invention.
- FIG. 2 is a block diagram illustrating a two-tier memory mapping according to an example embodiment of the invention.
- FIG. 3 is a block diagram illustrating a three-tier memory mapping according to an example embodiment of the invention.
- FIG. 4A is a graph illustrating a generalized multi-tier mapping data organization according to an example embodiment of the invention.
- FIG. 4B is a flowchart illustrating a generalized multi-tier mapping procedure according to an example embodiment of the invention.
- FIG. 5 is a block diagram illustrating an example of mapping data being written to a second tier map according to an example embodiment of the invention.
- FIG. 6 is a flowchart illustrating reading data using a multi-tier map according to an example embodiment of the invention.
- FIG. 7 is a flowchart illustrating writing data using a multi-tier map according to an example embodiment of the invention.
- FIG. 8 is a flowchart illustrating a garbage collection procedure using a multi-tier map according to an example embodiment of the invention.
- the present disclosure relates to mapping of memory units in a solid state memory device for use by a host device.
- the mapping may be used for purposes of finding physical addresses of data based on a logical address used by a host device to access the data.
- the memory device may generally maintain a top-tier mapping that includes references to lower tier mapping units.
- the lower tier mapping units may include additional mapping data that is stored together with user data of a user data portion of non-volatile memory. This additional mapping data facilitates locating the targeted memory units, and may be arranged as multiple tiers within the non-volatile user memory.
- Non-volatile memory generally refers to data storage that retains data upon loss of power.
- Non-volatile data storage devices come in a variety of forms and serve a variety of purposes. These devices may be broken down into two general categories: solid state and non-solid state storage devices.
- Non-solid state data storage devices include devices with moving parts, such as hard disk drives, optical drives and disks, floppy disks, and tape drives. These storage devices may move one or more media surfaces and/or an associated data head relative to one another in order to read a stream of bits.
- the following discussion is directed to solid-state, non-volatile memory embodiments. These embodiments are provided for purposes of illustration and not of limitation, and concepts may be applicable to other types of data storage that have similar characteristics to solid-state, non-volatile memory devices.
- Solid-state storage devices differ from non-solid state devices in that they typically have no moving parts.
- Solid-state storage devices may be used for primary storage of data for a computing device, such as an embedded device, mobile device, personal computer, workstation computer, or server computer.
- Solid-state drives may also be put to other uses, such as removable storage (e.g., thumb drives) and for storing a basic input/output system (BIOS) that prepares a computer for booting an operating system.
- Flash memory is one example of a solid-state storage media.
- Flash memory e.g., NAND or NOR flash memory, generally includes cells similar to a metal-oxide semiconductor (MOS) field-effect transistor (FET), e.g., having a gate (control gate), a drain, and a source.
- the cell includes a “floating gate.” When a voltage is applied between the gate and the source, the voltage difference between the gate and the source creates an electric field, thereby allowing electrons to flow between the drain and the source in the conductive channel created by the electric field. When strong enough, the electric field may force electrons flowing in the channel onto the floating gate.
- Solid state memory may be distinguished from magnetic media in how data is rewritten.
- In a magnetic media such as a disk drive, each unit of data (e.g., byte, word) can be arbitrarily overwritten in place. In contrast, flash memory cells must first be erased by applying a relatively high voltage to the cells before being written, or “programmed.” For a number of reasons, these erasures are often performed on blocks of data (also referred to herein as “erase units”) that are larger than the data storage units (e.g., pages) that may be individually read or programmed.
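The page/erase-unit asymmetry described above can be modeled minimally as follows; the block size and class names are illustrative assumptions, not values from the patent:

```python
# Pages are programmed individually, but erasure clears a whole block.
PAGES_PER_BLOCK = 4  # assumed block size for illustration

class EraseBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK  # None = erased state

    def program(self, page, data):
        # A programmed page cannot be overwritten in place.
        if self.pages[page] is not None:
            raise ValueError("page must be erased before programming")
        self.pages[page] = data

    def erase(self):
        # Erasure is only possible at whole-block granularity.
        self.pages = [None] * PAGES_PER_BLOCK
```

This constraint is why new writes go to fresh units and why the mapping between logical and physical addresses must change frequently.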
- a flash drive controller may need to frequently change the mapping between physical and logical addresses. Such mapping may be used to facilitate quick access to arbitrary blocks of data, as well as ensuring the data can be recovered in case of power loss.
- An apparatus, method, and computer-readable medium according to embodiments of the invention facilitate achieving these and other goals in a solid-state storage device using a multiple-tiered mapping of addresses.
- In FIG. 1, a block diagram illustrates an apparatus 100 which may incorporate concepts of the present invention.
- the apparatus 100 may include any manner of persistent storage device, including a solid-state drive (SSD), thumb drive, memory card, embedded device storage, etc.
- a host interface 102 may facilitate communications between the apparatus 100 and other devices, e.g., a computer.
- the apparatus 100 may be configured as an SSD, in which case the interface 102 may be compatible with standard hard drive data interfaces, such as Serial Advanced Technology Attachment (SATA), Small Computer System Interface (SCSI), Integrated Device Electronics (IDE), etc.
- the apparatus 100 includes one or more controllers 104 , which may include general- or special-purpose processors that perform operations of the apparatus.
- the controller 104 may include any combination of microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry suitable for performing the various functions described herein.
- the controller 104 may use volatile random-access memory (RAM) 108 during operations.
- the RAM 108 may be used, among other things, to cache data read from or written to non-volatile memory 110 , map logical to physical addresses, and store other operational data used by the controller 104 and other components of the apparatus 100 .
- the non-volatile memory 110 includes the circuitry used to persistently store both user data and other data managed internally by apparatus 100 .
- the non-volatile memory 110 may include one or more flash dies 112 , which individually contain a portion of the total storage capacity of the apparatus 100 .
- the dies 112 may be stacked to lower costs.
- the memory contained within individual dies 112 may be further partitioned into blocks, here annotated as erasure blocks/units 114 .
- the erasure blocks 114 represent the smallest individually erasable portions of memory 110 .
- the erasure blocks 114 in turn include a number of pages 116 that represent the smallest portion of data that can be individually programmed or read.
- the page sizes may range from 512 bytes to 4 kilobytes (KB), and the erasure block sizes may range from 16 KB to 512 KB. It will be appreciated that the present invention is independent of any particular size of the pages 116 and blocks 114 , and the concepts described herein may be equally applicable to smaller or larger data unit sizes.
- the controller 104 may also maintain mappings of logical block addresses (LBAs) to physical addresses in the volatile RAM 108, as these mappings may, in some cases, be subject to frequent changes based on a current level of write activity.
- the mapping module 106 uses a multi-level tiered mapping of addresses for performing forward address translations between the host interface 102 and non-volatile memory 110 .
- the term “forward mapping” or “forward address translation” generally refers to determining one or more physical addresses of the non-volatile memory 110 based on one or more logical addresses, e.g., used for access via host interface 102. This is in contrast to reverse mapping, which involves determining one or more logical addresses based on a physical address. While some of the concepts discussed herein may be applicable to both forward and reverse mapping, the discussion primarily focuses on forward mapping.
- a first (or top-level) tier 120 of information is directly maintained and/or accessed by the mapping module 106 .
- This top-level tier may be maintained in any combination of RAM 108 and a system portion (not shown) of non-volatile memory 110 .
- the non-volatile memory 110 may include a portion reserved for system memory (e.g., non-user memory).
- the system memory portion may include solid state memory (e.g., SLC flash memory) having an estimated life that is orders of magnitude higher than other portions (e.g., MLC flash memory) of the non-volatile memory 110 .
- the mapping module 106 maintains a forward map that addresses only the last memory unit written for each group of N memory units.
- the last memory unit to be written contains pointers to (e.g., physical addresses of) all other memory units in the group. If the memory being targeted for a write or read is not the last in the group to be written (e.g., the one the forward map points to) then an additional flash read is used to get the physical address of the targeted memory.
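A minimal sketch of this two-tier scheme, with a Python dict standing in for both the flash array and the forward map (all names are illustrative assumptions, not the patent's implementation):

```python
# flash: physical address -> (user_data, group_map), where group_map maps
# the index of each *other* group member to its physical address.
flash = {
    5:  ("d0", {}),                      # stale map data (not last written)
    7:  ("d1", {}),
    29: ("old-data", {}),
    10: ("new-data", {0: 5, 1: 7, 3: 29}),  # last written: knows the others
}

# forward map (top tier): group id -> (index of last-written unit, its address)
forward_map = {0: (2, 10)}

def read(group_id, index):
    last_index, last_addr = forward_map[group_id]
    if index == last_index:
        return flash[last_addr][0]       # target is the last-written unit: 1 read
    _, group_map = flash[last_addr]      # otherwise 1 extra read for the map
    return flash[group_map[index]][0]
```

Reading the last-written unit costs one flash read; any other group member costs at most one additional read to fetch its address.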
- the forward map as described above may include the top tier 120 .
- Each address of the top-level tier 120 may be used to access a particular group of addresses of one or more lower or sub-tiers 122 .
- the lower tier data 122 is part of a user portion of the non-volatile memory 110 . Additional mapping information is stored with the user data in these lower tiers 122 , and this additional data allows particular pages 116 (or other individually addressable areas of memory) within the group to be addressed. By using the user memory to store the mapping information of the lower tiers 122 , the mapping information may be continually kept up to date.
- the top tier 120 may include references to what are referred to herein as “mapping units.”
- the mapping units may include user data, and may be considered memory units/blocks primarily used for storing user data.
- the mapping units may also include data structures and/or physical data blocks that contain the mapping information for all of the other members of the same group of mapping units at a particular tier level.
- the top tier 120 may include a reference (e.g., address) to a plurality of mapping units of a second tier of the lower tiers 122 .
- Each reference of the top tier 120 points to one address of a group of units in the second tier, e.g., an address of a unit within that lower tier group that was last written/programmed. That group of memory units is associated with the particular address, and data contained within the addressed memory unit can be read to locate the final destination unit within the group.
- the top tier may point to an address of a mapping unit within a second-level tier group (e.g., directly below top tier 120 ).
- the second-level tier mapping unit may include references to subgroups within the second-level tier group, e.g., a third-level tier subgroup. These references in the second-level tier point to an address within the third-level tier subgroup, e.g., a mapping unit within the third tier subgroup that was last written.
- This third-level tier mapping unit contains mapping data used to locate the final targeted memory unit within the third-level tier. This can be extended to additional tier levels, if desired.
- In the mapping scheme embodiments described herein, there need be no predetermined correlation between groups of logical addresses and groups of physical addresses, e.g., assignments of physical to logical blocks may be arbitrary. For example, a block/range of logical addresses need not map to a corresponding block/range of physical addresses, and vice versa. While there may be cases where this occurs naturally or by design (e.g., large contiguous files subject to sequential read/writes), the forward and reverse mappings may in general be formed independently of any association between groupings of physical and logical addresses.
- In FIG. 2, a block diagram illustrates a two-tiered mapping scheme according to an embodiment of the invention.
- a top-level tier 202 of a forward map includes a number of references to mapping units, as represented by pointer data 204 .
- At least one member of the group 206 - 209 includes data that facilitates accessing the others of the group 206 - 209 .
- the pointer 204 references mapping unit 206 , which may have been the last mapping unit of the group 206 - 209 to have data written to it.
- Mapping unit 206 includes data portion 210 where the user data is stored.
- the mapping unit 206 also has mapping data (also referred to herein as metadata), represented here as pointers 212 - 214 .
- the pointers 212 - 214 reference the remaining members 207 - 209 of the group 206 - 209 .
- the remaining data members 207 - 209 also include sections usable for storing mapping data, e.g., section 216 of memory unit 207 .
- Those data regions 216 may have been used for that purpose at the time the respective mapping unit 207 was written to, but the data in region 216 would be considered invalid/stale because mapping unit 206 contains the most recent mapping data. If and when the user data in 207 is updated, then this would generally involve writing the data to a new memory unit (not shown), because it may be inefficient to erase and then rewrite at the current address of unit 207 .
- the mapping data currently stored in 212 - 214 would be updated and written to the mapping data section of the newly written mapping unit.
- pointer 204 is updated to point to the recently written mapping unit. Thereafter, the mapping data in sections 212 - 214 would be considered stale/invalid.
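The update sequence described above can be sketched as follows; the data layout and names are assumptions for illustration only. Note the ordering: user data and the fresh group map are written together to the new unit before the top-tier pointer is updated, so a failed write leaves the old map intact.

```python
flash = {5: ("a", {}), 7: ("b", {}), 29: ("c", {0: 5, 1: 7})}
forward_map = {0: (2, 29)}   # group 0: index 2 last written, at address 29
free_addrs = [10, 11]        # erased units available for programming

def write(group_id, index, data):
    last_index, last_addr = forward_map[group_id]
    _, old_map = flash[last_addr]
    # Rebuild the group map: the previously last-written unit gains an
    # entry, and the index now being rewritten drops its stale entry.
    new_map = dict(old_map)
    new_map[last_index] = last_addr
    new_map.pop(index, None)
    new_addr = free_addrs.pop(0)
    flash[new_addr] = (data, new_map)          # data + map written together
    forward_map[group_id] = (index, new_addr)  # top tier updated last
```

After `write(0, 1, "b2")`, the old copy of index 1 (at address 7) and the old map (at address 29's metadata) are stale, mirroring the staleness of sections 212-214 described above.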
- the size of the top-level tier 202 of the forward map may be significantly reduced, e.g., by a factor of N, where N is the number of mapping units in each group.
- enough memory may need to be reserved for N copies of the second level map, because each mapping unit needs to reserve space to address the other group members in case that mapping unit is the last written memory unit for that group.
- This may somewhat reduce the amount of user memory available, because each memory unit may need to reserve enough space to address N−1 other memory units within the group.
- For example, for a group size of N=8, the size of the forward map is reduced by a factor of 8.
- the size of the forward map may be significantly reduced (by a factor of N for a group size of N). Further, the lower tier mapping data on flash is always current, so additional flash updates are not required to back up the map, since all the lower tier map updates are written along with the data. Because a portion of the map is written with the data, there may be no need to log mapping updates in order to periodically update the forward map in flash.
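A back-of-the-envelope check of the factor-of-N reduction; drive capacity, page size, and map-entry size are assumptions for illustration, not values from the patent:

```python
# Assumed parameters for illustration only.
PAGE_SIZE = 4096                 # 4 KiB pages
CAPACITY = 2**40                 # 1 TiB of user memory
ENTRY_BYTES = 4                  # size of one forward-map entry
N = 8                            # mapping units per group

pages = CAPACITY // PAGE_SIZE                 # number of addressable pages
flat_map_bytes = pages * ENTRY_BYTES          # one entry per page (flat map)
top_tier_bytes = (pages // N) * ENTRY_BYTES   # one entry per group (tiered)
```

Under these assumptions the RAM-resident top tier shrinks from 1 GiB to 128 MiB, with the remaining mapping state carried in the metadata written alongside user data.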
- The approach may be extended to additional mapping dimensions, e.g., N*M*P.
- a top-level tier 302 of a forward map includes a number of references to mapping units, as represented by pointer data 304 .
- the pointer data 304 references a mapping unit 306, which is the last mapping unit written among all the lower tier mapping units associated with pointer 304.
- the second tier (immediately below the top tier 302 ) is represented as N-groups 308 - 311 . Within each of the N-groups 308 - 311 are M-mapping units, such as mapping units 312 - 315 shown in group 311 .
- the referenced mapping unit 306 contains a first portion of metadata, e.g., metadata portions 316 - 318 , that contains only part of the path to the targeted memory address.
- After pointer 304 accesses the last memory unit written in groups 308 - 311 (which is memory unit 306 ), one of the N−1 sub-group pointers 316 - 318 in that memory unit 306 is next referenced, unless the targeted memory is already within group 308 . If the targeted memory is within group 308 , there is no need to read the data in 316 - 318 .
- a third-tier pointer 320 - 322 within memory unit 306 may then be read to address the ultimate target within group 308 . If memory unit 306 is the target, then this second read is also unnecessary.
- mapping unit 312 is the last written unit of the M memory units in group 311 , and so one of pointers 324 - 326 may then be accessed to locate the targeted unit 312 - 315 within the group 311 .
- When data is written, a new memory unit receives the data, and this new memory unit stores mapping data for both tiers.
- This newly written mapping data includes mapping data for memory units within groups 308 - 311 as well as mapping data for use within group 311 .
- After such a write, at least some of the mapping data in mapping unit 306 may be invalid/stale.
- the lower level mapping data 320 - 322 may still be valid, because mapping unit 306 may still be the last written memory unit within group 308 .
- the lower tier mapping data 122 in FIG. 1 can be considered a K-dimensional map.
- a grouping of mapping units can be defined such that each item in the group can be referenced by a K-dimensional coordinate.
- Each group member (e.g., memory storage unit) is identified by a coordinate (x1, x2, . . . , xK).
- every memory unit in the group has a corresponding physical address.
- In single-level addressing, each unit of data written has the address of each group member embedded with the data.
- In multi-level addressing, only part of the addresses are included with each unit of data, and up to K reads may be required to locate the physical address of any particular group member.
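The up-to-K-reads property can be sketched as a walk that follows one pointer per coordinate level; the dict layout below is an illustrative assumption:

```python
# flash: physical address -> per-level pointer list for the last-written
# unit at that branch. Keys are (level, coordinate-value) pairs.
flash = {
    1: {(0, 0): 1, (0, 1): 2},  # level-0 list (the "first list")
    2: {(1, 0): 3},             # level-1 list under the branch b1 = 1
    3: {},                      # leaf mapping unit
}

def resolve(top_addr, coord):
    """Resolve a K-dimensional coordinate with one flash read per level."""
    addr = top_addr
    for level, b in enumerate(coord):
        addr = flash[addr][(level, b)]  # one flash read per coordinate
    return addr
```

Resolving a 2-dimensional coordinate such as `(1, 0)` touches two units on the path, matching the "up to K reads" bound above.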
- the mapping units in a multi-level mapping arrangement can be represented as leaves on a tree with each level of branches represented by one of the K coordinates. This is shown by way of example in graph 400 of FIG. 4A .
- the coordinates specify a particular path through the tree.
- the meta-data stored with the data for any particular leaf is determined by the path to that leaf, e.g., at level 404 .
- the first list 402 contains the physical address for the latest memory unit with each value of the first coordinate.
- a second set of lists 406 contains the physical address for the latest memory unit with each value of the second coordinate.
- the kth list 404 contains the physical address for the latest mapping unit with each value of the kth coordinate that also shares the first k−1 branches.
- FIG. 4B is a flowchart showing an example subroutine according to an embodiment of the invention.
- a request 412 is received to access desired memory at coordinate (b1, b2, . . . , bK), where the coordinate of the latest memory unit written is (a1, a2, . . . , aK).
- In FIG. 5, a block diagram illustrates an example of mapping data being written to a second tier map according to an example embodiment of the invention.
- Row 502 represents a range of physical addresses that may be updated with each write.
- the concepts described in regards to FIG. 5 are equally applicable to non-continuous ranges, and physical addresses and/or address ranges need not be linked/dedicated to particular logical addresses or address ranges.
- Each of rows 504 - 509 represents a snapshot of the memory associated with physical address space 502 at a particular time/state.
- each of the cells in the rows 504 - 509 may be considered to represent the smallest individually addressable unit within the containing device (e.g., page).
- these concepts may be equally applicable to other memory unit sizes both smaller and larger than a memory page (e.g., blocks, segments, words, etc.).
- only writes to single cells within the rows 504 - 509 are shown in this example, although embodiments of the invention are not limited to only single-unit writing.
- Columns 512 and 514 on the left hand side of FIG. 5 contain data that may be stored in and/or used by a top tier map for each write action in rows 504 - 509 .
- Column 512 contains an index to one of four mapping units that is targeted for writing for each of rows 504 - 509 .
- These indices 512 may be arbitrarily assigned, or may be based in whole or in part on a logical block address (e.g., four least significant bits of a logical block address).
- Column 514 contains the address of the last unit written for each row. For example, when data is written to an index in column 512 , the address pointed to in the previous row of column 514 is first read to determine the final physical address for that index, and then column 514 is updated with a newly assigned address after the write completes.
- a column 516 that indicates metadata included in the last written memory unit in each row 504 - 509 .
- This metadata 516 includes three physical addresses corresponding to three of the four indices that may be referenced in column 512 .
- the system/apparatus knows the index of the targeted mapping unit, e.g., the index of the targeted unit in 512 may be included in the top-tier map along with address data in 514 . In such a case, only the N−1 addresses 516 of the other members of the group need be written to the mapping unit.
- the stored addresses 516 can be ordered from smallest to largest index, excluding the index of the last written mapping unit storing the addresses, e.g., the mapping unit referenced in column 514 . Other orderings may also be possible, e.g., largest to smallest index.
- mapping metadata in column 516 corresponding to row 504 is stored in mapping unit 518 , and this metadata 516 may at least include addresses of indices 1 , 3 , and 4 , in that order. As seen by the dark border surrounding unit 518 , this is the last memory unit to have been written, and so the physical address of memory unit 518 is shown in column 514 but not in the metadata 516 .
- mapping data shown in column 516 corresponding to the previous row 504 is accessed to find the physical address of the targeted index.
- the mapping data 516 shows that index 3 resides at physical address 29 . Because the storage media includes flash memory or the like, the new data is not directly overwritten to this address, but a new, empty memory unit is selected to receive the data.
- the new memory unit chosen to receive the data corresponds to the cell at physical address 10 , and is indicated in the drawing as memory unit 520 in row 505 .
- the metadata 522 in column 516 , which includes at least the addresses of indices 1 , 2 , and 4 , is also written to the mapping unit 520 . Accordingly, the top level map data in columns 512 and 514 may then be updated with the respective index and physical address.
- the index data in column 512 is already known because index 3 was targeted for writing. However the top-tier map may not be updated with this index (or the new physical address) until the write completes, so that the previous mapping data will be retained in the event of a write failure.
- the index data 512 may not be stored at all with the top tier map, and need not be updated. For example, the index may be derived from a bitmask of the logical address of the targeted memory, and can thereby be dynamically determined at the time the memory access request is received.
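The bitmask derivation mentioned above can be sketched for a group size of four; the mask width is an assumption for illustration, not a value from the patent:

```python
GROUP_SIZE = 4  # assumed: four mapping units per group, as in FIG. 5

def index_of(lba):
    # The two least-significant bits select one of the four units in a
    # group, so the index never needs to be stored in the top-tier map.
    return lba & (GROUP_SIZE - 1)

def group_of(lba):
    # The remaining bits select the top-tier group entry.
    return lba >> 2
```

Because the index is a pure function of the logical address, it can be recomputed on every access request instead of being persisted.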
- mapping unit at address 29 (formerly index 3 in row 504 ) is shaded to indicate that the memory unit is stale/invalid.
- garbage collection may be invoked to make this memory unit available for programming.
- some or all of the valid data in addresses 502 may be copied elsewhere, and some or all of memory at addresses 502 may be erased for later reuse.
- the metadata in columns 514 and 516 may be changed appropriately to reflect the new addresses, and these updates are then written to the respective top-tier map and memory units. Additional considerations regarding the garbage collection of multi-tiered mapping storage units are discussed further hereinbelow.
- in FIG. 6 , a flowchart illustrates a more detailed procedure 600 for reading a two-tier map according to an embodiment of the invention.
- a top tier mapping unit (MU) corresponding to the logical address being read is determined 602 .
- a lookup using a logical address may return a physical address, and this physical address need not be unique to the logical address (e.g., may be associated with a contiguous or arbitrary group of logical addresses).
- the index or other lower-tier identifier is also determined 604 .
- the lookup 602 for the top tier MU may also return the index/identifier, and/or the index/identifier may be obtained directly from data in the logical address itself.
- the forward map is read 606 to determine the physical address A LAST of the latest MU written. It will be appreciated that reading 606 of the forward map may be performed in conjunction with determining 602 the top-mapping unit, e.g., the output of 602 may include the address A LAST . If it is determined 608 that the latest MU written has the same index as the desired MU, the target physical address A TARGET is set 610 to the address A LAST of the latest MU written, and this address can then be read 612 . Otherwise, the physical address is read 614 to get the second tier map, which contains the physical address of all other top tier MU members, including the desired one.
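- The read flow of procedure 600 can be modeled with a short, hypothetical Python sketch; here `forward_map` holds, per top tier group, the address A LAST and index of the latest MU written, and `flash` stands in for what is stored at each physical address (all structures are assumptions for illustration):

```python
def read_target_address(forward_map, flash, group, index):
    a_last, last_index = forward_map[group]   # 606: read forward (top tier) map
    if index == last_index:                   # 608: target is the latest MU?
        return a_last                         # 610: A_TARGET = A_LAST, no flash read
    _, second_tier = flash[a_last]            # 614: flash read of second tier map
    return second_tier[index]                 # holds all other group members

# One group of N = 4 units; index 2 was written last, at physical address 7.
forward_map = {0: (7, 2)}
flash = {7: (2, {0: 29, 1: 12, 3: 41})}
print(read_target_address(forward_map, flash, 0, 2))   # 7  (latest MU, no extra read)
print(read_target_address(forward_map, flash, 0, 3))   # 41 (from second tier map)
```

Note that only the non-latest indices cost a flash read, which is where the (N−1)/N figure below comes from.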
- the probability of requiring a flash read is (N−1)/N.
- very large sequential I/Os may only require a single flash read to determine the physical address of all the memory units in any top tier group.
- the probability of requiring a flash read on a per-request basis is at or near 1, because at least one other physical address may need to be read from the second tier metadata to perform the read.
- the ratio of reads on a per-mapping-unit basis approaches 1/N, e.g., only one read of the metadata is needed per N mapping units read.
- reading the mapping unit may represent little or no overhead in terms of read operations compared to a situation where the entire forward map is maintained in RAM. It is likely that, for a large sequential read, the top-tier mapping unit of a group would be read anyway, whether the mapping unit contains any mapping metadata or not.
- in FIG. 7 , a flowchart shows an example writing procedure 700 according to an embodiment of the invention. As with the read procedure 600 in FIG. 6 , this may involve determining 702 the top tier MU corresponding to the logical address being written, determining 704 the index for the desired MU within the top tier MU, and reading 706 the forward map to determine the physical address A LAST of the latest MU written. This location A LAST contains the latest second tier map, which needs to be updated to reflect the new location being written.
- the second tier map is read 708 and stored for later use.
- a new address A NEW is selected 710 for writing the new data.
- This new address A NEW , as well as the old address A LAST and the current index, can be used to update 712 the address map accordingly.
- the updated map and new/updated user data are written 714 to A NEW , and the top tier map is also updated 716 to reflect the new top tier mapping unit.
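- The write flow of steps 706 - 716 can be sketched similarly (a hypothetical model, loosely mirroring the FIG. 5 walk-through but with 0-based indices; user data payloads are omitted):

```python
def write_mapping_unit(forward_map, flash, group, index, a_new):
    a_last, last_index = forward_map[group]   # 706: locate the latest MU
    _, old_map = flash[a_last]                # 708: read the old second tier map
    new_map = dict(old_map)                   # 712: update the address map:
    new_map[last_index] = a_last              #   previous latest MU joins the map
    new_map.pop(index, None)                  #   target's address moves to top tier
    flash[a_new] = (index, new_map)           # 714: write updated map with the data
    forward_map[group] = (a_new, index)       # 716: top tier now points at A_NEW

# Index 3 (currently at address 29) is rewritten to the empty unit at
# address 10; address 29 thereby becomes stale.
forward_map = {0: (7, 2)}
flash = {7: (2, {0: 5, 1: 12, 3: 29})}
write_mapping_unit(forward_map, flash, 0, 3, 10)
print(forward_map[0])   # (10, 3)
print(flash[10][1])     # {0: 5, 1: 12, 2: 7}
```

Because the top tier map is updated last (step 716), a failure mid-write leaves the previous mapping intact, matching the write-failure behavior described above.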
- the probability of requiring a flash read is 1, because the map needs to be updated regardless of which memory unit is targeted.
- the new data is written to all the MUs in the group and so the old second tier map may not be needed. In such a case, the probability of requiring a flash read approaches zero.
- with non-uniform workloads, it may make sense to cache mapping data differently depending on whether the data is “cold” or “hot.”
- the term “hot” data generally refers to data that has recently experienced a high frequency of write access. Data that has not been changed frequently and/or recently is referred to as “cold” data. Additional indicators/levels of activity may also be defined, e.g., “warm.”
- a controller may cache the full map (e.g., all N tiers of mapping data) for hot data, and only cache the top tier map for the cooler data.
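- A minimal sketch of such a temperature-based caching policy follows; the write-count threshold and tier names are assumptions for illustration only:

```python
# Hypothetical policy: hold every mapping tier in RAM for "hot" groups,
# only the top tier for "cold" ones. The threshold is an assumed value.
HOT_WRITE_THRESHOLD = 8

def tiers_to_cache(recent_writes):
    """Choose which mapping tiers to keep cached in RAM for a group."""
    if recent_writes >= HOT_WRITE_THRESHOLD:
        return ("top", "second")    # full map cached for hot data
    return ("top",)                 # top tier only for cooler data

print(tiers_to_cache(20))   # ('top', 'second')
print(tiers_to_cache(1))    # ('top',)
```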
- in FIG. 8 , a flowchart illustrates a procedure 800 for garbage collection according to an example embodiment of the invention. Because of how flash memory is erased (e.g., by applying a relatively large reverse voltage to the memory cells), erasure is often performed on garbage collection units (GCUs), which are collections of multiple pages or other addressable memory units.
- This procedure 800 may first involve determining 802 every physical memory address A GCU in the GCU, and then determining 804 the corresponding logical address A L for each physical address.
- Square brackets are used to indicate that, e.g., A GCU [ ] is a collection of addresses.
- This notation is used in some programming languages to indicate a particular collection, e.g., an array, although the present example need not be limited to arrays.
- a temporary map may be used to determine the physical addresses A GCU [ ] based on the logical addresses A L [ ].
- This determination 804 may involve determining the logical address of a unit of data by directly reading that data. It will be appreciated that the data in the addresses A GCU [ ] may be stale, and so the forward map may need to be read for each logical address found A L [ ] in order to determine if the data is valid.
- This reading of the forward map may involve an iteration 806 for each logical address found.
- the physical address corresponding to the logical address is determined 807 . This determination may involve determining the corresponding top tier mapping unit and index for the logical address.
- the forward map and second tier map are accessed to locate the physical address, A P2 . If the physical address A P2 determined from the forward map is equal to the physical address read at step 804 (here shown in block 808 as a lookup to the map M using the current logical address as the key), then A P2 is valid and is added 810 to a list/collection of valid addresses.
- each of the valid addresses is iterated through 812 .
- a new memory location outside the current GCU is determined 814 .
- the second tier mapping is modified 816 to reflect the new location of the recycled data, and this updated second tier mapping is written with the data.
- the data with modified mapping information is then written 818 to the new location.
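- The brute-force collection of procedure 800 can be sketched as follows; the record layout, the `free_addrs` pool, and the example addresses are all assumptions for illustration:

```python
def garbage_collect(gcu_addrs, flash, forward_map, free_addrs):
    valid = []
    for a_p in gcu_addrs:                          # 802/804: scan units in the GCU
        rec = flash[a_p]
        group, index = rec["group"], rec["index"]  # 807: derive group and index
        a_last, last_index = forward_map[group]
        if index == last_index:                    # 808: forward-map lookup of A_P2
            a_p2 = a_last
        else:
            a_p2 = flash[a_last]["map"][index]
        if a_p2 == a_p:
            valid.append(a_p)                      # 810: unit is still valid
    for a_p in valid:                              # 812: iterate valid addresses
        a_new = free_addrs.pop(0)                  # 814: location outside the GCU
        flash[a_new] = dict(flash[a_p])            # 816/818: rewrite data + mapping
        group, index = flash[a_new]["group"], flash[a_new]["index"]
        a_last, last_index = forward_map[group]
        if index == last_index:
            forward_map[group] = (a_new, index)
        else:
            flash[a_last]["map"][index] = a_new    # point second tier at new home
    return valid

# Latest MU of group 0 is index 2 at address 7. Within the GCU, address 29
# still holds index 0 (valid), while address 30 holds a stale copy of
# index 1 (whose current address is 12).
flash = {
    7:  {"group": 0, "index": 2, "map": {0: 29, 1: 12, 3: 41}},
    29: {"group": 0, "index": 0, "map": {}},
    30: {"group": 0, "index": 1, "map": {}},
    12: {"group": 0, "index": 1, "map": {}},
    41: {"group": 0, "index": 3, "map": {}},
}
forward_map = {0: (7, 2)}
print(garbage_collect([29, 30], flash, forward_map, [50]))  # [29]
print(flash[7]["map"][0])                                   # 50
```

The per-unit forward-map lookup inside the first loop is exactly the overhead that motivates the GCU directory alternative described below.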
- the garbage collection procedure 800 reflects somewhat of a brute-force approach. While relatively straightforward to implement, it may require too much overhead in terms of flash reads and/or computation for some purposes.
- One alternative to this type of procedure is to maintain a directory of all the memory units stored in each GCU. This directory can be maintained in the controller while the GCU is being filled and then written to flash when full. The directory may not identify mapping units that are stale because of subsequent writes.
- One way to identify the stale mapping units in such a case is to have a validity map for all directory entries in each GCU. This can be kept in RAM, and can be volatile, since the validity information can be determined from the forward map with some additional delay, e.g., using the procedure 800 .
- the validity map for all GCUs may be kept very compact, e.g., requiring only 1 bit for each mapping unit.
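- A one-bit-per-unit validity map can be sketched with a plain integer used as a bit array (an illustrative model only; class and method names are assumptions):

```python
class ValidityMap:
    """One validity bit per mapping unit in a GCU, held in volatile RAM."""

    def __init__(self, units_per_gcu):
        self.bits = 0               # 256 units -> only 32 bytes of bits per GCU
        self.n = units_per_gcu

    def mark_valid(self, slot):
        self.bits |= (1 << slot)    # set the unit's bit on write

    def mark_stale(self, slot):
        self.bits &= ~(1 << slot)   # clear the bit when superseded

    def is_valid(self, slot):
        return bool(self.bits >> slot & 1)

vm = ValidityMap(256)
vm.mark_valid(3)     # unit written
vm.mark_stale(3)     # unit later superseded by a newer write
print(vm.is_valid(3))   # False
```

Being volatile is acceptable here since, as noted above, the same information can be rebuilt from the forward map at some delay cost.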
- a GCU directory may also be useful for rebuilding the top tier map after a power loss.
- the time to rebuild may be limited by the RAM writing time rather than the flash reading time.
- the RAM writing time may be reduced significantly by consolidating all of the directory entries for the same top tier group within each GCU. By including the mapping information with the data, it is protected by the outer code and can be recovered in the event of a die failure.
- a timestamp may be used to identify the latest second tier map in each group. In such a case, a GCU directory could still be included as an additional redundancy. It may contain similar information as the time stamps, but in another format.
Publications (2)
Publication Number | Publication Date |
---|---|
US20120023282A1 (en) | 2012-01-26 |
US8341340B2 (en) | 2012-12-25 |