Performance Isolation and Fairness for Multi-Tenant Cloud Storage



  • David Shue*, Michael Freedman*, and Anees Shaikh✦
  • *Princeton ✦IBM Research

Setting: Shared Storage in the Cloud

  • [Figure: tenants Zynga (Z), Yelp (Y), TP (T), and Foursquare (F) run on shared cloud services (S3, EBS, SQS) backed by shared key-value storage, with their data spread across common storage nodes]
  • Multiple co-located tenants ⇒ resource contention
  • Fair queuing @ big iron (a single, shared server)

Predictable Performance is Hard

  • Multiple co-located tenants ⇒ resource contention
  • Distributed system ⇒ distributed resource allocation
  • Skewed object popularity ⇒ variable per-node demand
    • [Figure: each tenant's keyspace is split into data partitions whose popularity varies across the shared storage nodes]
  • Disparate workloads ⇒ different bottleneck resources
    • 10B GET (small reads), 1kB GET (large reads), 10B SET (small writes), 1kB SET (large writes)

Tenants Want System-wide Resource Guarantees

  • [Figure: Zynga, Yelp, Foursquare, and TP receive provisioned rates of 80, 120, 160, and 40 kreq/s from the shared key-value store]
  • Hard limits ⇒ lower utilization
    • e.g. demand_z = 120 kreq/s is throttled to rate_z = 80 kreq/s even while demand_f = 120 kreq/s is fully served (rate_f = 120 kreq/s), so spare capacity goes unused

Pisces Provides Weighted Fair-shares

  • Per-tenant weights: w_z = 20%, w_y = 30%, w_f = 40%, w_t = 10%
  • Shares track demand: demand_z = 30% ⇒ rate_z = 30% and demand_f = 30% ⇒ rate_f = 30%, so a tenant can exceed its weighted minimum when others leave capacity idle

Pisces: Predictable Shared Cloud Storage

  • Pisces
    • Per-tenant max-min fair shares of system-wide resources ~ min guarantees, high utilization (a fair-share sketch follows this comparison)
    • Arbitrary object popularity
    • Different resource bottlenecks
  • Amazon DynamoDB
    • Per-tenant provisioned rates ~ rate limited, non-work conserving
    • Uniform object popularity
    • Single resource (1kB requests)
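
To make the contrast concrete, the sketch below computes weighted max-min fair shares by water-filling, so demand left unused by one tenant flows to tenants with excess demand instead of being rate-limited away. It is a minimal illustration, not the Pisces allocator; the function name, units, and example numbers are invented for this example.

    def weighted_max_min(capacity, weights, demands):
        """Water-filling allocation of a shared capacity (e.g. kreq/s).

        weights: {tenant: weight}, demands: {tenant: offered load}.
        Each tenant gets min(demand, weighted share of remaining capacity);
        capacity freed by satisfied tenants is redistributed, so the result
        is work-conserving up to total demand.
        """
        alloc = {t: 0.0 for t in weights}
        active = set(weights)                 # tenants still wanting more
        remaining = capacity
        while active and remaining > 1e-9:
            total_w = sum(weights[t] for t in active)
            share = {t: remaining * weights[t] / total_w for t in active}
            done = {t for t in active if demands[t] - alloc[t] <= share[t]}
            if not done:                      # everyone is still demand-limited
                for t in active:
                    alloc[t] += share[t]
                break
            for t in done:                    # cap at demand, free the rest
                remaining -= demands[t] - alloc[t]
                alloc[t] = demands[t]
            active -= done
        return alloc

    # Hypothetical numbers: weights 2:3:4:1 over 400 kreq/s of capacity.
    # Z demands more than its weighted share and absorbs capacity that
    # Y and T leave idle, unlike a hard per-tenant rate limit.
    print(weighted_max_min(400, {"Z": 2, "Y": 3, "F": 4, "T": 1},
                                {"Z": 120, "Y": 60, "F": 300, "T": 20}))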

Predictable Multi-Tenant Key-Value Storage

  • [Figure: Tenant A and Tenant B VMs issue GETs through request routers (RR) to shared storage nodes; a controller assigns global tenant weights Weight_A and Weight_B and per-node local weights (w_A1/w_B1, w_A2/w_B2)]
  • Four mechanisms: Partition Placement (PP) and Weight Allocation (WA) at the controller, Replica Selection (RS) at the request routers, and Fair Queuing (FQ) at the storage nodes

Strawman: Place Partitions Randomly

  • [Figure: tenant partitions assigned to storage nodes at random; one node ends up overloaded]
  • Random placement ignores per-partition demand, so popular partitions can pile onto a single node and overload it

Pisces: Place Partitions By Fairness Constraints

  • Collect per-partition tenant demand at the controller
  • Bin-pack partitions onto storage nodes subject to fairness and capacity constraints, so no node is overloaded (a placement sketch follows below)
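
The exact placement algorithm is not spelled out in these slides; as a stand-in, here is a greedy first-fit-decreasing sketch that bin-packs partitions by measured demand while keeping each node within capacity and spreading each tenant's demand across nodes. All names and the tie-breaking rule are illustrative assumptions.

    from collections import defaultdict

    def place_partitions(partitions, node_capacity, num_nodes):
        """Greedy stand-in for fairness-constrained partition placement.

        partitions:    list of (partition_id, tenant, measured_demand)
        node_capacity: per-node service capacity, same units as demand
        Returns {node_index: [partition_id, ...]}.
        """
        load = [0.0] * num_nodes                         # total demand per node
        tenant_load = [defaultdict(float) for _ in range(num_nodes)]
        placement = defaultdict(list)
        # Heaviest partitions first: the classic bin-packing heuristic.
        for pid, tenant, demand in sorted(partitions, key=lambda p: -p[2]):
            feasible = [n for n in range(num_nodes)
                        if load[n] + demand <= node_capacity]
            if not feasible:
                raise RuntimeError("no feasible node: add capacity or split partitions")
            # Prefer the node where this tenant has the least demand so far,
            # so local weight allocation stays feasible; break ties by load.
            best = min(feasible, key=lambda n: (tenant_load[n][tenant], load[n]))
            placement[best].append(pid)
            load[best] += demand
            tenant_load[best][tenant] += demand
        return placement

In the talk's decomposition this step runs at the controller on a timescale of minutes, re-planning as the collected per-partition demand drifts.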

Strawman: Allocate Local Weights Evenly

  • Split each tenant's global weight evenly across nodes: w_A1 = w_B1 and w_A2 = w_B2
  • Excessive demand from one tenant on one node overloads it, because even local weights ignore where each tenant's demand actually falls

Pisces: Allocate Local Weights By Tenant Demand

  • Compute each tenant's per-node +/- mismatch between demand and local weight, estimated from an M/M/1 queuing-delay model
  • Find the maximum mismatch and perform a reciprocal weight swap (A←B on one node, A→B on another), e.g. shifting w_A1 = w_B1, w_A2 = w_B2 to w_A1 > w_B1, w_A2 < w_B2 (see the sketch below)
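
A toy, single-step version of the reciprocal swap is sketched below, assuming two or more equal-capacity nodes and two or more tenants, with per-node weights and demands normalized to sum to 1. The real mechanism estimates mismatch from an M/M/1 queuing-delay model and solves a maximum bottleneck flow to choose swaps; this greedy pairwise step only approximates that.

    def reciprocal_weight_swap(weights, demands, step=0.05):
        """One greedy round of demand-driven local weight exchange.

        weights: {node: {tenant: local weight fraction}}    (sums to 1 per node)
        demands: {node: {tenant: observed demand fraction}} (sums to 1 per node)
        The most-starved tenant gains weight on its bottleneck node and gives
        the same amount back on another node, so global shares are preserved.
        """
        # mismatch > 0: the tenant wants more weight on that node than it has.
        mismatch = {(n, t): demands[n][t] - weights[n][t]
                    for n in weights for t in weights[n]}
        (n1, a), worst = max(mismatch.items(), key=lambda kv: kv[1])
        if worst <= 0:
            return weights                                  # all demands fit
        # Donor on n1: the tenant with the largest weight surplus there.
        b = min(weights[n1], key=lambda t: mismatch[(n1, t)])
        # Reciprocal node: where b is short of weight and a has surplus.
        n2 = max((n for n in weights if n != n1),
                 key=lambda n: mismatch[(n, b)] - mismatch[(n, a)])
        delta = min(step, weights[n1][b], weights[n2][a])
        weights[n1][a] += delta      # a gains weight on its bottleneck node
        weights[n1][b] -= delta
        weights[n2][a] -= delta      # ...and gives the same amount back on n2
        weights[n2][b] += delta
        return weights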

Strawman: Select Replicas Evenly

  • Request routers split each tenant's requests 50%/50% across replicas
  • An even split ignores the skewed local weights (w_A1 > w_B1, w_A2 < w_B2), so requests queue up where a tenant's local weight is small

Pisces: Select Replicas By Local Weight

  • Request routers split each tenant's requests in proportion to its local weights, e.g. 60%/40% instead of 50%/50%
  • Routers detect weight mismatch from request latency and shift the split accordingly (a simplified selector is sketched below)
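
Below is a simplified, weight-sensitive selector in the spirit of the FAST-TCP-based policy named later in the talk: it starts from the tenant's local-weight split and nudges traffic away from replicas whose observed latency rises. The class, its EWMA constants, and the update rule are assumptions for illustration, not the Pisces implementation.

    import random

    class ReplicaSelector:
        """Weight-sensitive replica selection with latency feedback (sketch)."""

        def __init__(self, local_weights, gamma=0.1):
            total = sum(local_weights.values())
            self.frac = {r: w / total for r, w in local_weights.items()}
            self.latency = {r: None for r in local_weights}   # EWMA per replica
            self.gamma = gamma                                 # adaptation rate

        def choose(self):
            # Sample a replica in proportion to the current traffic split.
            x, acc = random.random(), 0.0
            for replica, frac in self.frac.items():
                acc += frac
                if x <= acc:
                    return replica
            return replica                                     # float slack

        def report(self, replica, latency):
            # Update the latency estimate, then shift share toward faster replicas.
            old = self.latency[replica]
            self.latency[replica] = latency if old is None else 0.8 * old + 0.2 * latency
            if any(l is None for l in self.latency.values()):
                return                                         # wait for samples
            mean = sum(self.latency.values()) / len(self.latency)
            for r in self.frac:
                # Slower-than-average replicas lose share, faster ones gain it.
                self.frac[r] *= max(0.05, 1.0 + self.gamma * (mean - self.latency[r]) / mean)
            total = sum(self.frac.values())
            self.frac = {r: f / total for r, f in self.frac.items()}

A request router would keep one such selector per tenant and replica group and feed back measured GET latencies, e.g. selector = ReplicaSelector({"node1": 0.6, "node2": 0.4}).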

Strawman: Queue Tenants By Single Resource

  • One tenant is bandwidth limited (large GETs), the other request limited (small GETs)
  • Fair queuing on a single fixed resource (a fair share of out-bytes) misallocates when tenants bottleneck on different resources (out-bytes vs. requests)

Pisces: Queue Tenants By Dominant Resource

  • Track a per-tenant resource vector (e.g. requests served and bytes out)
  • Queue each tenant by its dominant resource fair share: the bandwidth-limited tenant is bottlenecked by out-bytes, the request-limited tenant by request rate (a scheduler sketch follows below)
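
A minimal scheduler illustrating the idea: each tenant is charged a resource vector per request, usage is normalized by per-resource capacity, and the backlogged tenant with the smallest weighted dominant share is served next. This is a generic dominant-resource scheduling sketch, not the talk's fair queuing implementation; a real scheduler would also window or decay past usage. All names and parameters here are assumed.

    from collections import deque

    class DominantResourceScheduler:
        """Serve the backlogged tenant with the smallest weighted dominant share."""

        def __init__(self, weights, capacities):
            self.weights = weights                       # {tenant: weight}
            self.capacities = capacities                 # {resource: capacity}
            self.usage = {t: {r: 0.0 for r in capacities} for t in weights}
            self.queues = {t: deque() for t in weights}

        def enqueue(self, tenant, request, cost_vector):
            # cost_vector, e.g. {"req": 1, "out_bytes": 1024} for a 1kB GET.
            self.queues[tenant].append((request, cost_vector))

        def dominant_share(self, tenant):
            # Largest capacity-normalized resource use, scaled by tenant weight.
            return max(self.usage[tenant][r] / self.capacities[r]
                       for r in self.capacities) / self.weights[tenant]

        def dequeue(self):
            backlogged = [t for t in self.queues if self.queues[t]]
            if not backlogged:
                return None
            tenant = min(backlogged, key=self.dominant_share)
            request, cost = self.queues[tenant].popleft()
            for resource, amount in cost.items():
                self.usage[tenant][resource] += amount   # charge the vector
            return request

With capacities like {"req": 200_000, "out_bytes": 1.2e9}, a 10B-GET tenant is charged mostly against "req" and a 1kB-GET tenant mostly against "out_bytes", so each is queued on its own bottleneck.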

Pisces Mechanisms Solve For Global Fairness

  • The four mechanisms operate at different timescales and levels of system visibility:
    • Partition Placement (PP) runs at the controller with global visibility on a timescale of minutes, computing a feasible placement under fairness and capacity constraints
    • Weight Allocation (WA) runs at the controller with global visibility on a timescale of seconds, driving weights by demand via a maximum bottleneck flow weight exchange
    • Replica Selection (RS) runs at the request routers with local visibility at microsecond timescale, using a weight-sensitive, FAST-TCP based selection policy
    • Fair Queuing (FQ) runs at the storage nodes with local visibility at microsecond timescale, enforcing dominant resource fair shares

Evaluation

  • Does Pisces achieve (even) system-wide fairness?
    • Is each Pisces mechanism necessary for fairness?
    • What is the overhead of using Pisces?
  • Does Pisces handle mixed workloads?
  • Does Pisces provide weighted system-wide fairness?
  • Does Pisces provide local dominant resource fairness?
  • Does Pisces handle dynamic demand?
  • Does Pisces adapt to changes in object popularity?


Pisces Achieves System-wide Per-tenant Fairness

  • Setup: 8 tenants, 8 clients, 8 storage nodes; Zipfian object popularity distribution; 1kB requests
  • Ideal fair share: 110 kreq/s per tenant
  • Min-Max Ratio (MMR): min tenant rate / max tenant rate, in (0, 1]; 1.0 means perfectly even shares (see the snippet below)
  • Unmodified Membase: 0.57 MMR; Pisces: 0.98 MMR
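
For reference, the fairness metric is simple to compute; the rates below are made-up numbers, not taken from the evaluation.

    def min_max_ratio(rates):
        """Min-Max Ratio (MMR): min tenant rate / max tenant rate, in (0, 1]."""
        return min(rates) / max(rates)

    # Hypothetical per-tenant rates in kreq/s; 1.0 would be perfectly even.
    print(round(min_max_ratio([112, 109, 108, 111, 107, 110, 113, 106]), 2))  # 0.94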

Each Pisces Mechanism Contributes to System-wide Fairness and Isolation

  • [Figure: min-max ratio for unmodified Membase and for cumulative mechanism combinations (FQ, +PP, +WA, +RS) under 1x and 2x tenant demand; reported MMR values span 0.36 to 0.98, with the highest values for the full mechanism stack]
  • Each configuration is annotated with how often it achieves fairness (sometimes, usually, or always)

Pisces Imposes Low-overhead

  • [Figure: throughput comparison; Pisces imposes < 5% overhead, versus > 19% for the alternative configuration shown]

Pisces Achieves System-wide Weighted Fairness

  • [Figure: weighted fair-share results across several weight settings, including a run with 4 heavy hitters and 40 low-demand tenants; reported MMRs: 0.98, 0.91, 0.91, 0.89, and 0.56]

Pisces Achieves Dominant Resource Fairness

  • [Figure: per-tenant bandwidth (Mb/s) and GET request rate (kreq/s) over time]
  • The 1kB, bandwidth-limited workload receives 76% of the bandwidth, while the 10B, request-limited workload receives 76% of the request rate (leaving 24% of the request rate for the 1kB tenant)

Pisces Adapts to Dynamic Demand

  • [Figure: per-tenant throughput over time under constant, bursty, and diurnal (2x weight) demand]
  • Shares stay even across equal-weight tenants as demand varies, and the 2x-weight diurnal tenant receives ~2x its share

Conclusion

  • Pisces Contributions
    • Per-tenant weighted max-min fair shares of system-wide resources w/ high utilization
    • Arbitrary object distributions
    • Different resource bottlenecks
    • Novel decomposition into 4 complementary mechanisms
    • Partition Placement (PP), Weight Allocation (WA), Replica Selection (RS), and Fair Queuing (FQ)
