Advanced HDFS Features in Distributed Computing

Explore the advanced features of Hadoop Distributed File System (HDFS) including Highly Available NameNode setup, HA NameNode Failover, ZooKeeper lock management, HDFS Federation benefits, and Federated NameNodes scalability beyond heap size. Learn about ensuring fault tolerance, performance, and scalability in distributed computing environments using these key features.


Uploaded on Nov 13, 2024



Presentation Transcript


  1. Etcetera! CMSC 491 Hadoop-Based Distributed Computing Spring 2016 Adam Shook

  2. Agenda
  - Advanced HDFS Features
  - Apache Cassandra
  - Cluster Planning

  3. ADVANCED HDFS FEATURES

  4. Highly Available NameNode
  - The Highly Available NameNode feature eliminates the NameNode as a single point of failure (SPOF)
  - Requires two NameNodes and some extra configuration
  - Active/Passive or Active/Active
  - Clients only contact the active NameNode
  - DataNodes report in and heartbeat with both NameNodes
  - The active NameNode writes metadata to a quorum of JournalNodes
  - The standby NameNode reads the JournalNodes to stay in sync
  - There is no CheckpointNode (SecondaryNameNode); the passive NameNode performs the checkpoint operations

  5. HA NameNode Failover
  - There are two failover scenarios:
    - Graceful - performed by an administrator, e.g. for maintenance
    - Automated - the active NameNode fails
  - A failed NameNode must be fenced, which eliminates 'split brain syndrome'
  - Two fencing methods are available:
    - sshfence - kills the NameNode daemon
    - shell script - disables access to the NameNode, shuts down the network switch port, or sends a power-off signal to the failed NameNode
  - There is no 'default' fencing method

  6. ZooKeeper
  [Diagram: a ZKFC process runs alongside each NameNode and contends for a lock in ZooKeeper; shared NameNode state lives on NFS or the QJM. When the active NameNode's lock is released, the standby's ZKFC creates the lock and its NameNode becomes active ("I'm the boss"), serving the DataNodes.]
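The lock-based failover in the diagram can be sketched in miniature. This is an illustrative Python sketch, not ZooKeeper's actual API: `FakeZooKeeper`, `ZKFC`, and their methods are invented stand-ins for the ephemeral-znode lock that the real ZKFailoverController daemons contend for.

```python
import threading

class FakeZooKeeper:
    """Stand-in for a ZooKeeper ensemble holding one ephemeral lock znode."""
    def __init__(self):
        self._mutex = threading.Lock()
        self.holder = None  # name of the ZKFC currently holding the lock

    def try_create_lock(self, who):
        """Atomically create the lock znode; fails if it already exists."""
        with self._mutex:
            if self.holder is None:
                self.holder = who
                return True
            return False

    def release_lock(self, who):
        """Delete the znode (e.g. session expiry when the active NN dies)."""
        with self._mutex:
            if self.holder == who:
                self.holder = None

class ZKFC:
    """Each NameNode's ZKFC races to hold the lock; the winner goes active."""
    def __init__(self, name, zk):
        self.name, self.zk = name, zk
        self.state = "standby"

    def contend(self):
        if self.zk.try_create_lock(self.name):
            self.state = "active"   # "I'm the boss"
        return self.state

zk = FakeZooKeeper()
nn1, nn2 = ZKFC("nn1", zk), ZKFC("nn2", zk)
nn1.contend()          # nn1 wins the race -> active
nn2.contend()          # lock already exists -> nn2 stays standby
zk.release_lock("nn1") # nn1 crashes; its ephemeral znode disappears
nn2.contend()          # nn2 now acquires the lock -> failover complete
```

In the real system the "release" happens implicitly when the failed NameNode's ZooKeeper session expires, and fencing (previous slide) runs before the standby takes over.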

  7. HDFS Federation
  - Useful for:
    - Isolation/multi-tenancy
    - Horizontal scalability of the HDFS namespace
    - Performance
  - Allows multiple independent NameNodes to use the same collection of DataNodes
  - DataNodes store blocks from all NameNode pools

  8. Federated NameNodes
  - File-system namespace is scalable beyond a single NameNode's heap size
  - NameNode performance is no longer a bottleneck
  - NameNode failure/degradation is isolated: only data managed by the failed NameNode is unavailable
  - Each NameNode can be made Highly Available

  9. Hadoop Security
  - Hadoop's original design targeted web crawling and indexing; it was not designed for processing confidential data
  - Small number of trusted users: access to the cluster was controlled by providing user accounts, with little or no control over what a user could do once logged in
  - HDFS permissions were added in the Hadoop 0.16 release
    - Similar to basic UNIX file permissions
    - Can be disabled via dfs.permissions
    - Basically protection against user-induced accidents; they did not protect from attacks
  - Authentication is performed on the client side, so it is easily subverted via a simple configuration parameter

  10. Kerberos
  - Kerberos support was introduced in the Hadoop 0.22.2 release
  - Developed at MIT and freely available
  - Not a Hadoop-specific feature, and not included in Hadoop releases
  - Works on the basis of 'tickets' that allow communicating nodes to securely identify each other across insecure networks
  - Primarily a client/server model implementing mutual authentication: the user and the server verify each other's identity

  11. How Kerberos Works
  - Client forwards the username to the KDC
  - A. KDC sends the Client/TGS Session Key, encrypted with the user's password
  - B. KDC issues a TGT, encrypted with the TGS's key
  - C. Client sends B and the service ID to the TGS
  - D. Authenticator, encrypted with A
  - E. TGS issues the CTS ticket, encrypted with the SS key
  - F. TGS issues the CSS, encrypted with A
  - G. New authenticator, encrypted with F
  - H. Timestamp found in G, plus 1
  Glossary: KDC - Key Distribution Center; TGS - Ticket Granting Service; TGT - Ticket Granting Ticket; CTS - Client-to-Server Ticket; CSS - Client/Server Session Key
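The A-H exchange above can be traced end to end with a toy simulation. This is a hedged sketch: real Kerberos uses symmetric ciphers and a wire protocol (RFC 4120); here `enc`/`dec` are fake "encryption" that merely records which key sealed a payload, and all key names are invented for illustration.

```python
from datetime import datetime, timedelta

def enc(key, payload):
    """Toy 'encryption': pair the payload with the key it was sealed under."""
    return {"key": key, "payload": payload}

def dec(key, blob):
    assert blob["key"] == key, "wrong key"
    return blob["payload"]

USER_KEY, TGS_KEY, SS_KEY = "k_user", "k_tgs", "k_service"

# Client -> KDC: just the username (the password never crosses the wire)
# A: KDC sends the Client/TGS session key under the user's password-derived key
# B: KDC issues a TGT under the TGS's key
session = "sess1"
A = enc(USER_KEY, session)
B = enc(TGS_KEY, {"user": "alice", "session": session})  # the TGT

# C/D: client decrypts A (proving it knows the password), then sends the TGT
# plus an authenticator encrypted with the session key
client_session = dec(USER_KEY, A)
now = datetime(2016, 1, 1, 12, 0, 0)
D = enc(client_session, {"user": "alice", "ts": now})

# E/F: TGS opens the TGT, checks the authenticator, and issues the
# client-to-server ticket plus a client/server session key
tgt = dec(TGS_KEY, B)
auth = dec(tgt["session"], D)
assert auth["user"] == tgt["user"]
svc_session = "sess2"
E = enc(SS_KEY, {"user": "alice", "session": svc_session})  # CTS ticket
F = enc(tgt["session"], svc_session)                        # CSS key

# G: client sends the ticket and a fresh authenticator under the CSS key
css = dec(client_session, F)
G = enc(css, {"user": "alice", "ts": now})

# H: the service proves itself back by returning the timestamp + 1
ticket = dec(SS_KEY, E)
H = enc(ticket["session"], dec(ticket["session"], G)["ts"] + timedelta(seconds=1))
assert dec(css, H) == now + timedelta(seconds=1)  # mutual authentication done
```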

  12. Kerberos Services
  - Authentication Server
    - Authenticates the client
    - Gives the client enough information to authenticate with the Service Server
  - Service Server
    - Authenticates the client
    - Authenticates itself to the client
    - Provides services to the client

  13. Kerberos Limitations
  - Single point of failure
    - Must use multiple servers and implement fallback authentication mechanisms
  - Strict time requirements: 'tickets' are time-stamped, so clocks on all hosts must be carefully synchronized
  - All authentication is controlled by the KDC; compromise of this infrastructure allows attackers to impersonate any user
  - Each network service requiring a different host name must have its own set of Kerberos keys, which complicates virtual hosting of clusters
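The "strict time requirements" limitation boils down to a skew check like the one below. A minimal sketch: the five-minute window matches Kerberos' conventional default clock-skew limit, but the function name and shape are invented for illustration.

```python
from datetime import datetime, timedelta

MAX_SKEW = timedelta(minutes=5)  # Kerberos' conventional clock-skew default

def accept_authenticator(authenticator_ts, server_now):
    """Reject an authenticator whose timestamp drifts beyond the skew limit,
    which is what defeats replays but demands synchronized clocks."""
    return abs(server_now - authenticator_ts) <= MAX_SKEW

now = datetime(2016, 1, 1, 12, 0, 0)
accept_authenticator(now - timedelta(minutes=2), now)   # True: within skew
accept_authenticator(now - timedelta(minutes=10), now)  # False: clock too far off
```

A host whose clock drifts ten minutes would be locked out of the entire cluster until resynchronized, which is why NTP is effectively mandatory on Kerberized clusters.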

  14. APACHE CASSANDRA

  15. In a couple dozen words... Apache Cassandra is an open source, distributed, decentralized, elastically scalable, highly available, fault-tolerant, tunably consistent, column-oriented database with a lot of adjectives

  16. Overview
  - Originally created by Facebook and open sourced in 2008
  - Based on Google Bigtable and Amazon Dynamo
  - Massively scalable and easy to use
  - No relation to Hadoop; specifically, data is not stored on HDFS

  17. Distributed and Decentralized
  - Distributed: can run on multiple machines
  - Decentralized: no single point of failure
    - No master/slave issues, thanks to a peer-to-peer architecture (a gossip protocol, specifically)
  - Can run across geographic datacenters

  18. Elastic Scalability
  - Scales horizontally: adding nodes linearly increases performance
  - Decreasing and increasing node counts happens seamlessly

  19. Highly Available and Fault Tolerant
  - Multiple networked computers in a cluster
  - Facility for recognizing node failures
  - Fails over requests to another part of the system

  20. Tunable Consistency
  - Choice between strong and eventual consistency
  - Adjustable for read and write operations separately
  - Conflicts are resolved during reads
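The knob behind tunable consistency is the quorum arithmetic: with N replicas, a read level R and a write level W, reads are guaranteed to see the latest write whenever R + W > N, because the read and write quorums must overlap. A small sketch of that rule (function name invented for illustration):

```python
def is_strongly_consistent(n, r, w):
    """R + W > N forces every read quorum to overlap every write quorum,
    so at least one replica in each read holds the latest write."""
    return r + w > n

# N=3 replicas with QUORUM reads and writes (2 each): quorums overlap -> strong
is_strongly_consistent(3, 2, 2)  # True
# ONE/ONE favours latency; a read may hit a replica the write missed -> eventual
is_strongly_consistent(3, 1, 1)  # False
```

Because R and W are chosen per operation, an application can pay the quorum cost only on the reads and writes that actually need it.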

  21. Column-Oriented
  - Data is stored in sparse multi-dimensional hash tables
  - A row can have multiple columns, and not necessarily the same number of columns as other rows
  - Each row has a unique key, which is used for partitioning
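The "sparse multi-dimensional hash table" idea is easy to picture as a map of maps. This is only a mental-model sketch in Python, not Cassandra's storage engine; the table contents are invented examples.

```python
# Rows keyed by a unique row key (used for partitioning); each row is its
# own sparse map of columns -- rows need not share a schema.
table = {
    "song:1": {"title": "La Grange", "artist": "ZZ Top", "album": "Tres Hombres"},
    "song:2": {"title": "Jolene", "artist": "Dolly Parton"},  # no 'album' column
}

def columns_of(row_key):
    """List which columns this particular row actually has."""
    return sorted(table[row_key])

columns_of("song:1")  # ['album', 'artist', 'title']
columns_of("song:2")  # ['artist', 'title'] -- fewer columns, and that's fine
```

Sparseness means an absent column costs nothing: there is no NULL cell to store, unlike a fixed-schema relational row.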

  22. Query with CQL
  Familiar SQL-like syntax that maps to Cassandra's storage engine and simplifies data modeling:

  CREATE TABLE songs (
      id uuid PRIMARY KEY,
      title text,
      album text,
      artist text,
      data blob,
      tags set<text>
  );

  INSERT INTO songs (id, title, artist, album, tags)
  VALUES ('a3e648f...', 'La Grange', 'ZZ Top', 'Tres Hombres', {'cool', 'hot'});

  SELECT * FROM songs WHERE id = 'a3e648f...';

  23. When should I use this? Key features that complement a Hadoop system:
  - Geographical distribution
  - Large deployments of structured data

  24. CLUSTER PLANNING

  25. Workload Considerations
  - Balanced workloads: jobs are distributed across various job types (CPU bound, disk I/O bound, network I/O bound)
  - Compute-intensive workloads, e.g. data analytics: CPU-bound workloads require large numbers of CPUs and large amounts of memory to store in-process data
  - I/O-intensive workloads, e.g. sorting: I/O-bound workloads require a larger number of spindles (disks) per node
  - Not sure? Go with a balanced workload configuration

  26. Hardware Topology
  - Hadoop uses a master/slave topology
  - Master nodes include:
    - NameNode - maintains file system metadata
    - Backup NN - performs checkpoint operations and acts as a hot standby
    - ResourceManager - manages task assignment
  - Slave nodes include:
    - DataNode - stores HDFS blocks and manages read and write requests; preferably co-located with the TaskTracker/NodeManager
    - NodeManager - performs map/reduce tasks

  27. Sizing The Cluster
  - Remember: scaling is a relatively simple task
    - Start with a moderately sized cluster and grow it as requirements dictate
  - Develop a scaling strategy
    - As simple as scaling is, adding new nodes takes time and resources; you don't want to be adding new nodes each week
  - The amount of data, and the rate at which the volume of data increases, typically defines the initial cluster size
  - Drivers for determining when to grow your cluster: storage, processing, and memory requirements

  28. Storage Reqs Drive Cluster Growth
  - Suppose data volume increases at a rate of 1 TB/week
  - Remember block replication: 3 TB of storage is required just to store each week's data
  - Consider additional overhead - typically 30% - remembering files that are stored on a node's local disk
  - If DataNodes incorporate 4 x 1 TB drives, 1 new node per week is required
  - 2 years of data - roughly 100 TB - will require about 100 new nodes
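The arithmetic on this slide can be captured in a few lines. A sketch under the slide's own assumptions (3x replication, 30% overhead, 4 TB of usable disk per node); the function name and defaults are ours, not a Hadoop API.

```python
import math

def nodes_needed(raw_tb, replication=3, overhead=1.30, node_capacity_tb=4.0):
    """Translate a raw data volume into a DataNode count:
    raw * replication * overhead, divided across per-node disk capacity."""
    return math.ceil(raw_tb * replication * overhead / node_capacity_tb)

nodes_needed(1)    # 1 TB/week raw -> ~3.9 TB with replication+overhead -> 1 node/week
nodes_needed(104)  # two years at 1 TB/week -> 102 nodes, i.e. roughly 100
```

Plugging in your own replication factor, overhead estimate, and drive configuration turns the slide's rule of thumb into a first-cut capacity plan.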

  29. Things Break
  - Things are going to break; this assumption is a core premise of Hadoop
    - If a disk fails, the infrastructure must accommodate it
    - If a DataNode fails, the NameNode must manage this
    - If a task fails, the ApplicationMaster must manage this failure
  - Master nodes are typically a SPOF unless using a Highly Available configuration
    - If the NameNode goes down, HDFS is inaccessible: use NameNode HA
    - If the ResourceManager goes down, no jobs can run: use RM HA (in development)

  30. Cluster Nodes
  - Cluster nodes should be commodity hardware: buy more nodes, not more expensive nodes
  - Workload patterns and cluster size drive CPU choice
    - Small cluster (50 nodes or less): quad-core CPUs with a medium clock speed are usually sufficient
    - Large cluster: dual 8-core CPUs with a medium clock speed are sufficient
    - Compute-intensive workloads might require higher clock speeds
  - The general guideline is to buy more hardware instead of faster hardware
  - Lots of memory - 48 GB / 64 GB / 128 GB / 256 GB
    - Each map/reduce task consumes 1 GB to 3 GB of memory
    - The OS and daemons consume memory as well
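The memory bullet implies a simple per-node budget: subtract what the OS and daemons take, then divide the remainder by the per-task footprint. A sketch with invented names and example figures (8 GB reserved, 2 GB per task) drawn from the slide's 1-3 GB range:

```python
def concurrent_tasks(node_ram_gb, os_daemon_gb=8, per_task_gb=2):
    """Rough count of map/reduce tasks a node can run at once,
    after the OS and Hadoop daemons take their share of RAM."""
    return (node_ram_gb - os_daemon_gb) // per_task_gb

concurrent_tasks(64)   # 28 tasks at 2 GB each on a 64 GB node
concurrent_tasks(128)  # 60 tasks on a 128 GB node
```

Running more tasks than this budget allows pushes slaves into swap, which the storage slide below warns against.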

  31. Cluster Storage
  - 4 to 12 drives of 1-2 TB capacity - up to 24 TB/node
    - 3 TB drives work, but there is a network performance penalty if a node fails
  - 7200 RPM SATA drives are sufficient; a slightly above-average MTBF is advantageous
  - JBOD configuration: RAID is slow, and it is not required due to block replication
  - More, smaller disks are preferred over fewer, larger disks: increased parallelism for DataNodes
  - Slaves should never use virtual memory

  32. Master Nodes
  - Still commodity hardware, but... better
  - Redundant everything: power supplies, dual Ethernet cards
  - 16 to 24 CPU cores on NameNodes
    - NameNodes and their clients are very chatty and need more cores to handle messaging traffic
  - Medium clock speeds should be sufficient

  33. Master Nodes
  - The HDFS namespace is limited to the amount of memory on the NameNode
  - RAID and NFS storage on the NameNode
    - Typically RAID 5 with a hot spare
    - A second remote directory, such as NFS
    - Or the Quorum Journal Manager for HA

  34. Network Considerations
  - Hadoop is bandwidth intensive; this can be a significant bottleneck
  - Use dedicated switches
  - 10 Gb Ethernet is pretty good for large clusters

  35. Which Operating System?
  - Choose an OS that you are comfortable and familiar with; consider your admin resources and experience
  - RedHat Enterprise Linux: includes a support contract
  - CentOS: no support, but the price is right
  - Many other possibilities: SuSE Enterprise Linux, Ubuntu, Fedora

  36. Which Java Virtual Machine?
  - Oracle Java is the only supported JVM
    - Hadoop runs on OpenJDK, but use it at your own risk
  - Hadoop 1.0 requires Java JDK 1.6 or higher
  - Hadoop 2.x requires Java JDK 1.7

  37. References
  - http://cassandra.apache.org
  - http://redis.io/
  - http://try.redis.io - give it a test drive!
  - http://www.slideshare.net/jbellis/apache-cassandra-nosql-in-the-enterprise
  - http://www.slideshare.net/planetcassandra/cassandra-introduction-features-30103666
  - http://research.microsoft.com/en-us/um/people/srikanth/netdb11/netdb11papers/netdb11-final12.pdf
  - http://www.slideshare.net/miguno/apache-kafka-08-basic-training-verisign
