Reimagining Data Protection Strategies for Burton Snowboards

Burton Snowboards' traditional data protection methods struggle with the sheer volume of its marketing collateral. The existing infrastructure relies on tape backups and complex recovery processes, prompting a move to a more efficient approach that draws on prior big-data experience and newer technology such as OpenStack Swift.


Presentation Transcript


  1. How Burton Snowboards is Carving Down the OpenStack Trail
     Jim Merritt / Systems Engineer / Burton Snowboards

  2. How big is big data?

  3. Starting Point: The Data
     - ~250TB of various structured and unstructured data
     - Databases (ERP, DW, misc.) are actually a small percentage
     - Consists mainly of marketing collateral (e.g. video, photos, and other media)
     - Goal: protect all of the data and allow for data growth
     - Traditional methods of data protection are becoming expensive, both administratively and monetarily
     - "How do we recover?" is becoming a difficult question to answer

  4. Starting Point: The Infrastructure
     - Two SAN/NAS storage arrays
     - Disk-to-disk-to-tape data protection architecture
     - Tape library: 120 slots, LTO-5
     - Off-site storage facility for tapes
     - Off-site facility to host disaster recovery hardware
     - Commvault data protection software
     - Mix of server hardware (Dell, Cisco UCS, HP)
     - Mix of operating systems (Windows Server, Linux, Solaris)
     - Linux distributions: SUSE, CentOS, Debian
     - VMware ESXi

  5. The Old Way
     - Storage systems used both for primary storage and as the backup target
     - Complicated processes for data protection, and even more complicated recovery
     - Relied on shipping LTO tapes to an off-site facility
     - Difficult to execute the disaster recovery procedure

  6. Technical (and Not So Technical) Issues
     - Traditional data protection model: lots of data in flight (raw data, intermediate copies, AUX copies, copies of the copies)
     - Lots of tapes to manage; tape library/drive maintenance
     - Little deduplication in use, lots of data in flight
       - Video and images don't dedup well
       - Deduplication can be expensive
     - Integration timing between different data silos
       - Primary storage, LTO tape drives (SAN)
       - Network (NDMP, SMB, NFS)
       - Oracle (RMAN -> NFS)
     - Data management/curation
     - Administrative effort: complicated backup, complicated recovery

  7. Planning for Change
     - Leverage past experience with big data
       - Petabyte-scale data management and protection
       - Concept of raw and intermediate data
       - Familiarity with several object store solutions
     - Built out test implementations
     - Turns out our data problems are the same, just at a smaller scale
       - Large amount of static unstructured data
       - Old data still holds significant value and has to be retained

  8. Our Solution
     - Deploy OpenStack Swift as the backup target (see the sketch below)
     - Utilize a remote site as an additional object store location
     - Utilize commodity hardware and networking as appropriate
     - Archive old unstructured data
     - Gain a disaster recovery strategy almost as a side effect
     - Eliminate or drastically reduce tape management
     - Utilize SwiftStack to reduce deployment effort and ongoing maintenance and management
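
The slides keep the mechanics implicit, so here is a rough sketch of what a Swift backup target looks like from a client's point of view, using python-swiftclient (the standard Swift client library). The auth endpoint, credentials, and container/object names are placeholders, not Burton's actual configuration:

```python
# Minimal sketch: writing a backup artifact into an OpenStack Swift
# container via python-swiftclient. All endpoints/credentials are made up.
from swiftclient.client import Connection

conn = Connection(
    authurl="https://swift.example.com/auth/v1.0",  # hypothetical proxy VIP
    user="backup:svc",
    key="secret",
    retries=3,
)

conn.put_container("commvault-backups")  # no-op if it already exists
with open("dbbackup.bak", "rb") as f:
    etag = conn.put_object(
        "commvault-backups",
        "2016/09/dbbackup.bak",  # dated object path
        contents=f,
        content_type="application/octet-stream",
    )
print("stored, server-side MD5:", etag)
```

In practice, Commvault's cloud library speaks to Swift directly; a client sketch like this just shows the object semantics underneath.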

  9. Hardware/Software Implementation
     - OpenStack Swift implementation with 2 regions, 3 zones, 3 object nodes per zone, and 2 proxy-account-container nodes
     - Each zone is its own rack with separate power and network
     - One region is located at our main data center, and one region is located at a co-lo facility
     - Commvault cloud libraries created for dedup and non-dedup data
     - SwiftStack utilized for cluster deployment and management
     - SwiftStack CIFS/NFS gateway utilized for archive storage access (virtualized system)
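
Region/zone placement like this is encoded in Swift's rings. SwiftStack generates the rings as part of deployment, but the underlying mechanism is the standard swift-ring-builder CLI. A hedged illustration, driving that CLI from Python with invented IPs, ports, device names, and loop bounds; it does not claim to match the cluster's exact layout:

```python
# Illustrative only: build a 3-replica object ring spanning regions/zones.
# swift-ring-builder is the real Swift CLI; the topology below is made up.
import subprocess

def rb(*args):
    subprocess.run(["swift-ring-builder", "object.builder", *args], check=True)

rb("create", "14", "3", "1")   # part_power=14, replicas=3, min_part_hours=1
for region in (1, 2):          # e.g. region 1 = main DC, region 2 = co-lo
    for zone in (1, 2, 3):     # one zone per rack
        ip = f"10.{region}.{zone}.10"   # fictional object node address
        for dev in ("sdb", "sdc"):      # one entry per data disk in reality
            rb("add", f"r{region}z{zone}-{ip}:6200/{dev}", "100.0")
rb("rebalance")                # compute partition assignments
```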

  10. Server Hardware
      Object nodes:
      - 3 x Silicon Mechanics Storform v518.v5P: CentOS 7, 64GB RAM, 2 x E5-2620v3, 1 x 120GB SSD (operating system), 32 x 4TB SATA (object storage)
      - 6 x Silicon Mechanics Storform v518.v4: CentOS 7
      Proxy/account/container nodes:
      - 2 x Silicon Mechanics R345.v4: CentOS 7, 128GB RAM, 2 x E5-2650v2, 2 x 250GB (operating system), 3 x 200GB SSD (account/container storage)
      Networking:
      - Netgear 10GbE switch, 1 per zone
      SSL offload / load balancing:
      - Virtualized haproxy system, CentOS 7
      SwiftStack Gateway:
      - Virtualized

  11. New DP/DR Storage Infrastructure
      - Commvault media servers connect to the cloud library via haproxy
      - Archive storage ingest via CIFS/NFS through the SwiftStack Gateway
      - End users access archive data via the SwiftStack Gateway
      - Hosted version of the SwiftStack controller
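
One operational detail worth noting when fronting the proxies with haproxy: health checking. Swift's proxy ships a healthcheck middleware that answers "OK" on /healthcheck, which the load balancer and outside monitoring can poll. A minimal monitoring sketch; the VIP URL is hypothetical:

```python
# Minimal sketch: poll the Swift proxy's /healthcheck endpoint through the
# load-balancer VIP. Swift's healthcheck middleware returns 200 with "OK".
import requests

def swift_healthy(vip: str = "https://swift.example.com") -> bool:
    try:
        r = requests.get(f"{vip}/healthcheck", timeout=5)
        return r.status_code == 200 and r.text.strip() == "OK"
    except requests.RequestException:
        return False

if __name__ == "__main__":
    print("Swift proxy reachable:", swift_healthy())
```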

  12. Paradigm Shift
      - The new infrastructure required a procedural shift
      - From a wild-west, "data goes anywhere" mentality to more structured data placement
      - Place data in dated structures: put some initial structure on the unstructured data (see the sketch below)
      - Data marked for archive is moved only once, into the object store
      - Only a primary copy of backup data is required
        - No auxiliary copy created in Commvault for off-site retention
      - Use native database backup methods to write once via the CIFS/NFS gateway
        - Oracle RMAN in testing, MS-SQL in production
      - CommServe DR and dedup database backups go into the object store for access in case of DR
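
The "dated structures" idea is simple enough to sketch: group loose files by modification date before they are staged for archive, so each dated folder can later move into the object store as a unit. Paths and layout here are hypothetical, not Burton's actual script:

```python
# Hypothetical sketch: impose a YYYY/MM structure on a flat collateral share.
import shutil
from datetime import datetime, timezone
from pathlib import Path

SRC = Path("/mnt/collateral/inbox")            # made-up source share
DST = Path("/mnt/collateral/archive-staging")  # made-up staging area

for f in SRC.iterdir():
    if f.is_file():
        stamp = datetime.fromtimestamp(f.stat().st_mtime, tz=timezone.utc)
        dated = DST / f"{stamp:%Y}" / f"{stamp:%m}"   # e.g. .../2016/09
        dated.mkdir(parents=True, exist_ok=True)
        shutil.move(str(f), str(dated / f.name))
```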

  13. Backup Software Configuration This is the only additional configuration required for Commvault

  14. Network Traffic
      - Swift input: clients to the Swift proxy
      - Swift WAN: traffic between regions

  15. A Year After Initial Deployment
      - Commvault: after some initial issues, this has been working well
        - Commvault version 10, SP11 was the first version with solid Swift storage support
      - We currently have 160TB of backup data in the object store, using the default 3-replica policy
      - Currently archiving 25TB
        - At this time we move a dated folder to a separate container in Swift and create read-only CIFS access via the gateway (see the sketch below)
      - Much easier and more reliable recovery process
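
The "move a dated folder to a separate container" step maps naturally onto Swift's server-side COPY followed by DELETE. A hedged sketch with python-swiftclient; the container names, prefix, and credentials are illustrative:

```python
# Illustrative sketch: relocate all objects under a dated prefix into a
# dedicated archive container via server-side copy, then delete originals.
from swiftclient.client import Connection

conn = Connection(authurl="https://swift.example.com/auth/v1.0",
                  user="archive:svc", key="secret")

src, dst, prefix = "collateral", "archive-2015", "2015/"
conn.put_container(dst)
_, objects = conn.get_container(src, prefix=prefix, full_listing=True)
for obj in objects:
    conn.copy_object(src, obj["name"], destination=f"/{dst}/{obj['name']}")
    conn.delete_object(src, obj["name"])
```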

  16. ...and Next Steps
      - Erasure-coded container
      - Archive more data into the object store
      - Metadata search
      - Integration with our ELK stack for auditing

  17. Thank You
      Jim Merritt / Senior Systems Engineer / Burton Snowboards
      jimm@burton.com
