Understanding DDoS Attacks and Defense Strategies

In computing, DDoS attacks aim to disrupt a machine or network service by overwhelming its resources. This presentation covers the concept of DDoS, application-level attacks, botnets, defense mechanisms such as profiling and rate-limiting, and the effectiveness of the "Speak-Up" approach to mitigating DDoS threats.



Presentation Transcript


1. DDoS Defense by Offense
Michael Walfish, Mythili Vutukuru, Hari Balakrishnan, David Karger, Scott Shenker. SIGCOMM '06.
Presented by: Atul Bohara, Subho Banerjee. 4th December 2014.

2. What is DDoS?
- In computing, a denial-of-service (DoS) or distributed denial-of-service (DDoS) attack is an attempt to make a machine or network resource unavailable to its intended users.
- Focus: application-level DoS
  - Attackers send legitimate-looking requests
  - Requests overload resources such as CPU and disk (not the link)
[Figures from original SIGCOMM presentation]

3. What is DDoS? (contd.)
- Focus: application-level DoS
[Figures from original SIGCOMM presentation]

4. Botnets
- Networks of compromised and commandeered hosts
- Can be used to launch DDoS attacks and send spam e-mail
- Detection:
  - Primitive bots: profiling IP addresses
  - Sophisticated bots: hard to detect; long-term profiling by ISPs may reduce them
- R. Gummadi et al. "Not-a-Bot: Improving Service Availability in the Face of Botnet Attacks." NSDI '09.

5. DDoS Defenses
- Detect and block attackers:
  - CAPTCHAs [Morein et al. 03, Gligor 03, Kandula et al. 05]
  - Profiling [Mazu et al. 06]
  - Capabilities [Yaar et al. 04, Yang et al. 05]
- Detect, then deny:
  - Rate-limiting [Fair Queuing, Banga et al. 99, Kandula et al. 05]
- Currency-based approaches:
  - Proof-of-work [Dwork & Naor 92, Juels & Brainard 99, Aura et al. 00, Mankins et al. 01, Wang & Reiter 03, Hashcash] (see the sketch below)
  - Dilute attackers (make clients repeat requests) [Gunter et al. 04, Sherr et al. 05]
  - Taxation without identification
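
To make the "currency" idea concrete, here is a minimal hashcash-style proof-of-work sketch (not from the slides; the challenge string, difficulty, and function name are illustrative): the client burns CPU cycles finding a nonce before its request is served, which is the computational analogue of the bandwidth payment Speak-up uses later.

```python
import hashlib
import itertools

def proof_of_work(challenge: str, difficulty_bits: int = 20) -> int:
    """Find a nonce so that SHA-256(challenge:nonce) starts with
    `difficulty_bits` zero bits; the burned CPU time is the "payment"."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

# Illustrative use: a server would hand out the challenge and verify the nonce.
print(proof_of_work("GET /index.html from 192.0.2.7", difficulty_bits=18))
```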

6. Speak-Up
- When a server is under attack, encourage all clients to send more traffic to the server.

7. But Why Speak-up?
- Mitigates application-level attacks that are:
  - Hard to filter
  - Hard to rate-limit explicitly
  - Launched by botnets not much larger than the good clientele
[Figure: where traditional defenses vs. Speak-up apply, plotted against relative botnet size and difficulty of filtering/rate-limiting]

8. Applicability of Speak-up
- Bandwidth required for clients? Clients gain no matter how much bandwidth they have.
- Aggregate bandwidth required for clients? Depends on the server's spare capacity.
- Efficacy for a small clientele? Suffers if the clientele is much smaller than the botnet.
- Traffic load on the network? Attacks are rare, and the core is assumed to be already over-provisioned.

9. Assumptions
- Size(clientele) >> Size(botnet)
- Network assumptions:
  - Over-provisioning
  - Adequate link bandwidth
  - Adequate client bandwidth
- Conditions under which Speak-up is desirable:
  - No pre-defined clientele
  - Non-human clientele
  - Unequal requests, spoofing, or smart bots
- Example scenario: a web server

10. Goal: Bandwidth-Proportional Allocation
- Greed-proportional allocation: attackers get the bulk of the server
- Bandwidth-proportional allocation: allocate units of service based on client bandwidth
- Approximately fair allocation
[Figures from original SIGCOMM presentation]

11. Why Bandwidth-Proportional?
- Bandwidth is measurable, and clients can't fake it, provided they are forced to consume it
- Attackers max out their bandwidth, but good clients do not
- Basic idea: taxation without identification (bandwidth as a currency)
  - The server cannot differentiate good and bad clients
  - Speak-up instead improves the share of good clients in the traffic mix
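
A minimal sketch of what bandwidth-proportional allocation means in practice (the function, client names, and numbers are illustrative, not from the paper): each client's slice of the server's request capacity tracks the fraction of bytes it delivers to the front end.

```python
def proportional_shares(bytes_delivered: dict, capacity_rps: float) -> dict:
    """Split the server's request capacity in proportion to the bytes
    each client delivered during the last accounting interval."""
    total = sum(bytes_delivered.values())
    if total == 0:
        return {client: 0.0 for client in bytes_delivered}
    return {client: capacity_rps * sent / total
            for client, sent in bytes_delivered.items()}

# A good client on a 2 Mbit/s link vs. a bot saturating 10 Mbit/s:
print(proportional_shares({"good": 2_000_000, "bot": 10_000_000}, capacity_rps=60))
# -> {'good': 10.0, 'bot': 50.0}; the good client keeps a bandwidth-proportional slice
```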

12. Speak-up Design
- Thinner (server front end) handles server overload:
  - Limits the rate of requests reaching the server
  - Encourages clients
  - Allocates service proportionally to bandwidth

13. Speak-up Design (contd.)
- Thinner, under server overload:
  - Admits requests periodically
  - Sends encouragements to dropped clients
[Figure: clients, thinner, server]

14. Speak-up Design (contd.)
- Thinner, under server overload:
  - Admits requests periodically
  - Sends encouragements to dropped clients
- Which requests to admit?
[Figure: requests plus a congestion-controlled stream of dummy bits flowing through the thinner to the server]

15. Speak-up Design (contd.)
- Thinner, under server overload:
  - Admits requests periodically
  - Sends encouragements to dropped clients
- Which requests to admit? The highest sender
  - The others keep sending and eventually win
[Figure: requests plus a congestion-controlled stream of dummy bits flowing through the thinner to the server]
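
A compressed sketch of this admission logic (class and variable names are mine, not the paper's): the thinner accumulates the dummy bytes each pending client has paid and, whenever the server frees a slot, admits the current highest sender.

```python
from collections import defaultdict

class Thinner:
    """Toy front end: tracks dummy bytes paid by each pending client and
    admits the highest sender each time the server frees a slot."""

    def __init__(self):
        self.paid = defaultdict(int)  # client id -> dummy bytes received so far

    def on_payment(self, client_id: str, nbytes: int) -> None:
        # Called as congestion-controlled dummy bytes arrive from a client.
        self.paid[client_id] += nbytes

    def admit_one(self):
        # The highest sender wins and is removed; the rest keep their credit
        # and keep paying, so they eventually win too.
        if not self.paid:
            return None
        winner = max(self.paid, key=self.paid.get)
        del self.paid[winner]
        return winner

# The bot pays faster, but the good client is still served on the next slot.
t = Thinner()
t.on_payment("good", 5_000); t.on_payment("bot", 20_000)
print(t.admit_one())   # -> 'bot'
t.on_payment("good", 5_000)
print(t.admit_one())   # -> 'good'
```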

16. Encouragement Strategies
- Random drops and aggressive retries:
  - Encouragement: retry signal to dropped clients
  - Allocation: rate-limiting, drop requests at random; the price of access is the number of retries
- Explicit payment channel:
  - Encouragement: establish a separate payment channel
  - Allocation: virtual auction, admit the winner
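
For contrast with the auction above, a rough sketch of the first strategy from a single client's point of view (the drop probability, retry cap, and names are illustrative): the thinner drops requests at random and dropped clients retry immediately, so the price of access is the number of retries.

```python
import random

def request_with_retries(drop_probability: float = 0.9, max_retries: int = 10_000):
    """Model one client retrying until admitted; each retry consumes
    bandwidth, so clients that retry more buy a larger share of service."""
    retries = 0
    while retries < max_retries:
        if random.random() > drop_probability:   # thinner admits the request
            return {"admitted": True, "retries_paid": retries}
        retries += 1                              # thinner signals: retry now
    return {"admitted": False, "retries_paid": retries}

print(request_with_retries())
```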

17. Robustness to Cheating
- Theorem: in a system with a regular service interval, any client that continuously transmits an ε fraction of the average bandwidth received by the thinner gets at least an ε/2 fraction of the service, regardless of how the bad clients time or divide their bandwidth.
- The theorem can be both weaker and stronger than what is seen in practice:
  - Weaker: it assumes a regular service interval and a constant-bandwidth payment channel
  - Stronger: it makes no assumptions about the adversary
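
Stated symbolically (the symbols g and T are mine, chosen to match the prose above, not notation from the slides):

```latex
% g: a client's sustained payment bandwidth at the thinner
% T: total average bandwidth received by the thinner
% S_client / S_total: that client's fraction of the service granted
\[
  \varepsilon = \frac{g}{T}
  \quad\Longrightarrow\quad
  \frac{S_{\text{client}}}{S_{\text{total}}} \;\ge\; \frac{\varepsilon}{2}.
\]
% Example: a client supplying half of the thinner's incoming bandwidth
% (\varepsilon = 0.5) is guaranteed at least a quarter of the service.
```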

18. Speak-up Implementation
- Proxy (thinner):
  - Lets requests through when not congested
  - Sends JavaScript plus a form as encouragement when congested
  - Ends the POST after the client wins
- Client:
  - Sends a request
  - Constructs a 1 MByte string with the JavaScript
  - POSTs the string in the form; the POST request is the payment channel
[Figures from original SIGCOMM presentation]
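
As a rough non-browser analogue of that payment channel (the URL, chunk size, and use of the requests library are my assumptions, not the paper's code): a client can stream dummy bytes in a chunked POST until the thinner stops reading and returns the real response.

```python
import requests  # third-party HTTP library; supports chunked uploads from generators

def dummy_bytes(chunk_size: int = 8192):
    """Yield dummy payload indefinitely; the thinner terminates the upload
    once this client wins the auction."""
    while True:
        yield b"\0" * chunk_size

try:
    # The thinner reads the body as payment and responds when the client is admitted.
    resp = requests.post("http://thinner.example/pay", data=dummy_bytes(), timeout=60)
    print("admitted with status", resp.status_code)
except requests.RequestException as exc:
    print("payment channel closed or failed:", exc)
```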

19. Evaluation
- The implementation roughly meets its goals
- Speak-up-defended clients fall only a little behind the ideal allocation
- Always better than being undefended

20. Evaluation: validating the thinner's allocation
- Good clients do best when the server has the ability to process all requests
- Service is proportional to bandwidth even when the server can't process all requests

21. Evaluation: increased latency for clients sending dummy bytes
- With a large admit rate, the cost isn't very high
- Even with a small admit rate, the worst added delay is 1 second
- Better than getting no service during an attack!

22. Evaluation: good and bad clients sharing a bottleneck link
- Clients behind the link capture half of the server's capacity
- Good clients behind the link suffer
- The effect on good clients is greater when the bandwidth behind the bottleneck is larger

23. Evaluation: impact of Speak-up on other traffic
- High impact on a non-Speak-up client
- Latency inflation: 6x for 1 KB transfers, 4.5x for 64 KB transfers
- This is a pessimistic experiment: RTTs are large and the bottleneck bandwidth is restrictive

24. Discussion: how often do the conditions hold?
- Attacks are moving toward the application level
- Proxies are widespread; IP address stealing happens
- Botnet size vs. good clientele size: many botnets have fewer than 10k hosts, or are even smaller [Symantec, Rajab et al. 06, Arbor, LADS, McCarty 03]
- Anecdotally, botnets are getting smaller; smaller botnets will drive smarter attacks
- Many sites have access to a lot of bandwidth

25. Discussion: some objections to Speak-up
- Won't it harm the network? The inflation is only in traffic to attacked sites.
- Clients have unequal bandwidth. True: Speak-up is only roughly fair; a possible solution is to use proxies.
- Bandwidth envy: high-bandwidth clients are better off only during attacks, and ISPs could offer high-bandwidth proxies to low-bandwidth clients.
- Mobile devices sending dummy packets: what about power usage? Pricing?

26. Summary
- DDoS is evolving; traditional methods (detection, rate-limiting) are less effective
- Taxation is fairer than explicit identification
- Speak-up is proposed for application-level attacks: it allocates the server according to client bandwidth
- Speak-up trades bandwidth for server computation
