Project Voldemort

# Data is automatically replicated over multiple servers.
# Data is automatically partitioned, so each server contains only a subset of the total data.
# Server failure is handled transparently.
# Pluggable serialization allows rich keys and values, including lists and tuples with named fields, and integrates with common serialization frameworks such as Protocol Buffers, Thrift, and Java Serialization.
# Data items are versioned to maximize data integrity in failure scenarios without compromising the availability of the system (see the client sketch after this list).
# Each node is independent of the other nodes, with no central point of failure or coordination.
# Good single-node performance: you can expect 10-20k operations per second depending on the machines, the network, and the replication factor.
# Pluggable data placement strategies support distribution across data centers that are geographically far apart.
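To give a concrete feel for the versioning and no-coordinator points above, here is a minimal sketch of a read-modify-write cycle using Voldemort's Java client API. The bootstrap URL, the store name ("my_store"), and the key and value strings are placeholders for your own deployment, and the store is assumed to be configured with string keys and values:

```java
import voldemort.client.ClientConfig;
import voldemort.client.SocketStoreClientFactory;
import voldemort.client.StoreClient;
import voldemort.client.StoreClientFactory;
import voldemort.versioning.Versioned;

public class VoldemortExample {
    public static void main(String[] args) {
        // Bootstrap from any node in the cluster; there is no master
        // or central coordinator to single out.
        StoreClientFactory factory = new SocketStoreClientFactory(
                new ClientConfig().setBootstrapUrls("tcp://localhost:6666"));

        // A client bound to one named store defined in the cluster's store config.
        StoreClient<String, String> client = factory.getStoreClient("my_store");

        // Reads return a Versioned wrapper carrying the value's version.
        Versioned<String> value = client.get("some_key");
        if (value == null) {
            // First write for this key: no prior version to carry along.
            client.put("some_key", "some_value");
        } else {
            // Pass the version back with the write so conflicting
            // concurrent updates can be detected.
            value.setObject("some_value");
            client.put("some_key", value);
        }
    }
}
```

Because any node can serve the bootstrap request and every value carries its own version, the client never depends on a single coordinator.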

A highly scalable, distributed key-value database used by LinkedIn.