Woodstock Blog

a tech blog for general algorithmic interview questions

[Design] Distributed Hash Table

ref

Distributed hash table

A distributed hash table (DHT) is a class of decentralized distributed system that provides a lookup service similar to a hash table: (key, value) pairs are stored in the DHT, and any participating node can efficiently retrieve the value associated with a given key.

In other words, for a key/value pair, a DHT provides hash-table-like services (simple and fast storage, retrieval, and lookup) across a distributed cluster.

DHTs form an infrastructure that can be used to build more complex services, such as anycast, cooperative Web caching, distributed file systems, domain name services, instant messaging, multicast, and also peer-to-peer file sharing and content distribution systems.

Properties

Unlike unstructured P2P networks, a DHT tightly couples nodes to file locations: when a node requests a piece of content, the query is routed directly to the node responsible for it, instead of flooding the network with search messages.

DHT has the following properties:

  1. Autonomy and Decentralization: the nodes collectively form the system without any central coordination.

  2. Fault tolerance: the system should be reliable (in some sense) even with nodes continuously joining, leaving, and failing.

  3. Scalability: the system should function efficiently even with thousands or millions of nodes.

Building a DHT

  1. A hash function maps a file to a unique ID, e.g. hash(“Harry Potter”) -> 3912.
  2. The ID range space is distributed among all nodes in the network.
  3. The designated node stores the location of the file (this is the indirect approach).

Search in DHT

  1. A search query is routed to the node whose range covers the file’s ID.
  2. Each node retains routing information that is maintained in a fully distributed manner (i.e. no central point, no single point of failure).

There are different hashing and routing techniques associated with DHTs. The most important are consistent hashing and Chord routing.

Consistent Hashing

Consistent hashing is a special kind of hashing such that when a hash table is resized and consistent hashing is used, only K/n keys need to be remapped on average, where K is the number of keys, and n is the number of slots.

Motivation

In most traditional hash tables, a change in the number of array slots causes nearly all keys to be remapped. Specifically, with the naive mapping hash(object) % N over N cache servers, the three cases below all cause trouble:

  1. leaves/failures - a cache server m goes down (a case that must be handled in practice), so every object mapped to cache m becomes invalid; cache m has to be removed, leaving N-1 caches, and the mapping formula becomes hash(object) % (N-1);

  2. join - as load grows, a new cache is added, giving N+1 caches, and the mapping formula becomes hash(object) % (N+1);

  3. scalability - as hardware gets more powerful, you may want newly added nodes to take on more work, which the simple modulo hashing above also cannot express.

Technique

Consistent hashing is based on mapping each object to a point on the edge of a circle. The system maps each available machine to pseudo-randomly distributed points on the edge of the same circle.

  1. Assume the hash keys are uniformly distributed on a ring.
  2. All nodes are placed on the same ring.
  3. Each node is responsible for only a portion of the keys, so when a node joins or leaves, only that node and its neighbors are affected; in other words, only a small number of keys have to move.

For a detailed, step-by-step walkthrough of consistent hashing, read this Chinese blog.

In this way, consistent hashing avoids a major reshuffling of the key-to-node mapping when nodes join or leave.
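
To make this concrete, below is a minimal sketch of a consistent hashing ring in Java. It is not any particular production implementation: the node names, the use of String.hashCode as the hash, and the TreeMap-based ring are illustrative assumptions.

import java.util.SortedMap;
import java.util.TreeMap;

public class ConsistentHashRing {
    // position on the ring -> node name
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    private int hash(String key) {
        // any reasonably uniform hash works; mask to keep it non-negative
        return key.hashCode() & 0x7fffffff;
    }

    public void addNode(String node) {
        ring.put(hash(node), node);
    }

    public void removeNode(String node) {
        ring.remove(hash(node));
    }

    // walk clockwise from the key's position to the first node
    public String locate(String key) {
        if (ring.isEmpty()) {
            return null;
        }
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    public static void main(String[] args) {
        ConsistentHashRing ring = new ConsistentHashRing();
        ring.addNode("node-A");
        ring.addNode("node-B");
        ring.addNode("node-C");
        System.out.println(ring.locate("Harry Potter")); // current owner of this key
        ring.removeNode("node-B");                       // only node-B's keys move
        System.out.println(ring.locate("Harry Potter"));
    }
}

In practice each physical node is usually mapped to many virtual points on the ring so that keys (and the load) spread out more evenly.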

Routing (searching)

With simple routing, a query is forwarded from successor to successor, so the runtime is linear: each node keeps only O(1) routing information, but query routing takes O(n) time.

At the other extreme, every node stores the ID and IP of all nodes; query routing then takes O(1), but each node keeps O(n) routing information.

We’ll now discuss Chord Routing.

Chord Routing

Each node stores more information about the nodes closely following it on the identifier circle than about nodes farther away: it records the successors at distances 1, 2, 4, 8, 16, 32, … from itself (each entry is called a finger).

Each node in the network is assigned a unique id (for example by hashing the machine’s MAC address). Assuming the network has N nodes, the ids can be thought of as joined end to end into a ring, called the Chord ring. The distance between two nodes is defined as the difference of their positions on the ring. Each node keeps a routing table (the finger table) which records, going clockwise, the IP information of about log2(N) other nodes chosen at distances 2, 4, 8, 16, 32, …, 2^i from the node.

Routing information maintained at each node: O(log N).

Query routing takes O(log N) time.
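
A rough sketch of how such a finger table could be built, assuming finger[i] of node n points to successor(n + 2^i). This simplifies Chord considerably: the 6-bit id space, the globally visible node set, and the fingerTable helper are illustrative assumptions, and real Chord computes fingers locally and also handles joins, stabilization, and failures.

import java.util.TreeSet;

public class ChordSketch {
    static final int M = 6;                 // id space of size 2^M = 64
    static final int SIZE = 1 << M;

    // for the sketch, keep all node ids globally so we can compute successors
    static final TreeSet<Integer> NODES = new TreeSet<>();

    // successor of an id on the ring (wraps around)
    static int successor(int id) {
        Integer s = NODES.ceiling(id % SIZE);
        return s != null ? s : NODES.first();
    }

    // finger[i] of node n points to successor(n + 2^i)
    static int[] fingerTable(int n) {
        int[] finger = new int[M];
        for (int i = 0; i < M; i++) {
            finger[i] = successor((n + (1 << i)) % SIZE);
        }
        return finger;
    }

    public static void main(String[] args) {
        int[] ids = {1, 8, 14, 21, 32, 38, 42, 48, 51, 56};
        for (int id : ids) NODES.add(id);

        int node = 8;
        int[] finger = fingerTable(node);
        for (int i = 0; i < M; i++) {
            System.out.println("finger[" + i + "] of node " + node
                    + " (distance " + (1 << i) + ") -> node " + finger[i]);
        }
        // a lookup forwards the query via the largest finger that does not
        // overshoot the key, which takes O(log N) hops overall
    }
}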

Join and leave in Chord

It is very much like insertion and removal in a doubly linked list. Read up on it yourself.

Special thanks to the online resources written by some CSDN bloggers.

[Design] Cloud, Grid and Cluster

ref

Cluster VS. Cloud and Grid

Cluster differs from Cloud and Grid in that

  1. a cluster is a group of computers connected by a local area network (LAN)
  2. clouds and grids are wider in scale and can be geographically distributed.

Another way to put it is to say that

  1. a cluster is tightly coupled
  2. a Grid or a cloud is loosely coupled.

Also, the hardware:

  1. clusters are made up of machines with similar hardware
  2. clouds and grids are made up of machines with possibly very different hardware configurations.

Computer cluster

A computer cluster consists of a set of tightly connected computers that work together as a single system.

Unlike grid computers, computer clusters have each node set to perform the same task, controlled and scheduled by software.

In most circumstances, all of the nodes use the same hardware and OS. They are connected over a LAN, with each node running its own instance of the OS.

They are used to improve performance and availability over that of a single computer, while being more cost effective.

Computer clusters emerged as a result of:

  1. low-cost microprocessors,
  2. high-speed networks,
  3. software for high-performance distributed computing

Grid computing

Grid computing is the collection of computer resources from multiple locations to reach a common goal. The workloads are non-interactive. A grid acts like a “super virtual computer” composed of many networked, loosely coupled computers working together to perform large tasks.

Unlike conventional high performance computing systems such as cluster computing, grid computers have each node set to perform a different task.

Grid computers also tend to be more geographically dispersed than cluster computers. Grids are often constructed with general-purpose grid middleware software libraries.

A major disadvantage of grid computing is that if one piece of the software fails on one node, other pieces of the software on other nodes may fail as well.

Cloud computing

Cloud computing is a term for the utility-style provisioning and consumption of computing resources.

SaaS

An application doesn’t access the resources it requires directly; rather, it accesses them through something like a service. Instead of talking to a specific hard drive for storage and a specific CPU for computation, it talks to some service that provides these resources. The service then maps any requests for resources to its physical resources.

The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we call a Cloud. When a Cloud is made available in a pay-as-you-go manner to the general public, we call it a Public Cloud; the service being sold is Utility Computing.

Sharing of Resources

Usually the service dynamically allocates resources to maximize the effectiveness of the shared resources (per user and per demand).

For example, a cloud computer facility that serves European users during European business hours with a specific application (e.g., email) may reallocate the same resources to serve North American users during North America’s business hours with a different application (e.g., a web server).

With cloud computing, multiple users can access a single server to retrieve and update their data without purchasing licenses for different applications.

Scalability

  1. If an application requires only a small amount of some resource, say computation, then the service only allocates a small amount, say a small share on a single physical CPU.

  2. If the application requires a large amount of some resource, then the service allocates that large amount, say a grid of CPUs.

All the complex handling and coordination is performed by the service, not the application. In this way the application can scale well.

For example a web site written “on the cloud” may share a server with many other web sites while it has a low amount of traffic. If it ever has massive amounts of traffic, it may be moved to its own dedicated server, or grid of servers. This is all handled by the cloud service (provider), so the application shouldn’t have to be modified drastically to cope.

Other Advantages

Using cloud computing, companies can scale up to high capacities immediately, without investing in new infrastructure, training people, or licensing new software. It is most useful for small and medium-sized businesses that want to outsource their data-center infrastructure; some larger companies also prefer it when they want to avoid the cost of building internal data centers sized for peak load. In short, consumers use what they need and pay accordingly.

Grid Computing VS. Cloud Computing

  1. Grid computing is the parent of cloud computing; the cloud actually evolved from grid computing.

  2. A cloud would usually use a grid. A grid is not necessarily a cloud.

  3. Resource distribution: Cloud computing is a centralized model whereas grid computing is a decentralized model where the computation could occur over many administrative domains.

  4. Ownership: A grid is a collection of computers owned by multiple parties in multiple locations and connected together so that users can share the combined power of the resources, whereas a cloud is a collection of computers usually owned by a single party.

Examples

Examples of Clouds: Amazon Web Services (AWS), Google App Engine, Dropbox, Gmail, Facebook, Youtube, Rapidshare

Examples of Grids: FutureGrid

[Design] Big Data Storage

ref

Question

Given 1 trillion messages on Facebook, where each message has at most 10 words.

How do you build the index table and how many machines do you need on the cluster to store the index table?

One possible answer

Total data = 1 trillion * 10 words * 6 bytes/word = 60 TB = one small NetApp box.

Index by hashed userid; this will distribute traffic effectively across servers. Cache active users’ recent messages in memory.

However, we cannot use a NetApp box: from what I read on the FB engineering blog, they keep all the info in the main memory of the servers.

Total data = 1 trillion * 10 words * 6 bytes/word = 60 TB, plus about 1 TB for indexes, i.e. roughly 61 TB.

Assuming servers with 64 GB of RAM, about 61 GB of which is usable for this data, that comes to 61 TB / 61 GB ≈ 1,000 servers.
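
Spelled out as a quick back-of-the-envelope calculation (a sketch in Java; the 6 bytes per word, the 1 TB of index, and the 61 usable GB per server are the assumptions above, not measured figures):

public class CapacityEstimate {
    public static void main(String[] args) {
        long messages = 1_000_000_000_000L;        // 1 trillion messages
        long bytesPerMessage = 10 * 6;             // 10 words * 6 bytes/word
        long dataBytes = messages * bytesPerMessage;
        System.out.println("raw data TB: " + dataBytes / 1_000_000_000_000L); // ~60 TB

        long totalTB = 60 + 1;                     // data plus ~1 TB of index
        long usableGBPerServer = 61;               // 64 GB RAM, ~61 GB usable
        System.out.println("servers: " + totalTB * 1000 / usableGBPerServer); // ~1000
    }
}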

For more information

Read 2 other posts: [Design] Distributed hash table and [Design] Cloud, Grid and Cluster.

[Facebook] Write a Json Prettifier

Question

link

Input: {"firstName":"John","lastName":"Smith","isAlive":true,"age":25,"height_cm":167.6,"address":{"streetAddress":"21 2nd Street","city":"New York","state":"NY","postalCode":"10021-3100"},"phoneNumbers":[{"type":"home","number":"212 555-1234"},{"type":"office","number":"646 555-4567"}],"children":[],"spouse":null}

Output:

{
  "firstName": "John",
  "lastName": "Smith",
  "isAlive": true,
  "age": 25,
  "height_cm": 167.6,
  "address": {
    "streetAddress": "21 2nd Street",
    "city": "New York",
    "state": "NY",
    "postalCode": "10021-3100"
  },
  "phoneNumbers": [
    {
      "type": "home",
      "number": "212 555-1234"
    },
    {
      "type": "office",
      "number": "646 555-4567"
    }
  ],
  "children": [],
  "spouse": null
}

Solution

Since I did not find anything useful online, I spent an hour writing the following code. These are the things to take into consideration:

  1. Line-break after a comma or a left bracket, and before a right bracket. A special pattern is “},” - here we only line-break once, after the comma.

  2. Indentation is important. It is a little complex because we reduce the indentation BEFORE printing a right bracket, but increase it AFTER printing a left bracket.

Code

public void prettify(String input) throws Exception {

    // observations about the JSON format:
    // 1. each line ends with '{', '[', ',', '}' or ']'
    // 2. indentation depends on the nesting depth of brackets
    int len = input.length();
    int left = 0;
    int right = 0;
    int tab = 0;

    while (left < len) {
        // first, advance right pointer to the next line break point
        while (right < len) {
            if (input.charAt(right) == '}' || input.charAt(right) == ']') {
                // first case, if point to a closing bracket
                tab--;
                // indentation should change right away should we find a
                // closing bracket
                if (right + 1 < len && input.charAt(right + 1) != ',') {
                    break;
                }
            } else if (input.charAt(right) == ','
                    || input.charAt(right) == '{'
                    || input.charAt(right) == '[') {
                // second case, break at , or {
                break;
            } else if (right + 1 < len
                    && (input.charAt(right + 1) == '}' || input
                            .charAt(right + 1) == ']')) {
                // third case, break prior to }
                // we need not swap the order of first and third case,
                // because when we found a closing bracket, we need to
                // change indentation right away
                break;
            }
            right++;
        }

        // now print the chars from left pointer to right inclusively
        if (right == len) {
            // end of input
            if (tab != 0) {
                throw new Exception("Json format error!");
            }
            right--;
            // this is for the convenience of printing last line
        }
        printIndentation(tab);
        System.out.println(input.substring(left, right + 1));
        if (input.charAt(right) == '{' || input.charAt(right) == '[') {
            tab++;
        }

        // last, update pointers
        left = ++right;
    }
}

private void printIndentation(int tab) {
    for (int i = 0; i < tab; i++) {
        System.out.print("    ");
    }
}
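
As a quick usage sketch, assuming the two methods above live in a class called, say, JsonPrettifier (the class name and the shortened input string are just for illustration):

public static void main(String[] args) throws Exception {
    String input = "{\"firstName\":\"John\",\"age\":25,"
            + "\"address\":{\"city\":\"New York\",\"state\":\"NY\"}}";
    new JsonPrettifier().prettify(input);
    // prints one field per line, indented by the current bracket depth
}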

[Question] Frog Crossing (Dynamic Programming)

Question

link

A frog wants to cross a river n meters wide. There are stones in the river, but not at every position from 1 to n.

The frog has a peculiar jumping rule: if it jumps x meters on one jump, the next jump has to lie in the range {x-1, x, x+1}. Further, the first jump the frog makes is exactly 1 meter.

Given an array of 0s and 1s indicating whether each stone is present or removed, check whether it is possible to reach the other end.

Solution

This is a difficult DP question. It can be solved in O(n^2) time.

The main recurrence (matching the code below) is:

can_reach[s, d] = Stone[s] AND Stone[d] AND ( can_reach[p, s] for some p in { s-(d-s)-1, s-(d-s), s-(d-s)+1 } )

where ‘s’ is the stone the jump starts from, ‘d’ is the destination stone, and p ranges over the stones from which the frog could have arrived at s with a jump length that differs from the new jump length (d-s) by at most 1.

Code

public boolean canCross(int[] stones) {
    if (stones.length < 2 || (stones[0] != 1 || stones[1] != 1)) {
        // invalid input data
        return false;
    }
    int n = stones.length;
    boolean[][] dp = new boolean[n][n];
    // dp[i][j] denotes that frog can jump from index i to j

    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            if (stones[i] == 0 || stones[j] == 0) {
                // if either stones i or stone j is removed, skip
                continue;
            }
            // note that j starts from (i+1) so that dp[i][i] stays false;
            // otherwise dp[i][i+1] would always be true

            if (i == 0) {
                // if jump from position 0, can only reach 1
                dp[i][j] = j == 1;
            } else {
                // if jump from other positions, need to check previous
                // distance, within range or not.
                int dis = j - i;
                for (int pre = i - dis - 1; pre <= i - dis + 1; pre++) {
                    // pre is the previous position where frog jumps to i
                    if (pre < 0) {
                        continue;
                    } else if (dp[pre][i]) {
                        // frog jumps from pre to i, then frog is able to
                        // jump from i to j
                        dp[i][j] = true;
                        break;
                    }
                }
            }
            // finish calculating dp[i][j], now check termination
            if (j == n - 1 && dp[i][j]) {
                return true;
            }
        }
    }
    return false;
}
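
For a quick sanity check, a hypothetical main method (the class name FrogCrossing and the stones array are made up for illustration):

public static void main(String[] args) {
    // stones[i] == 1 means there is a stone i meters from the bank, 0 means removed
    int[] stones = {1, 1, 1, 0, 1, 1, 0, 1};
    // jumps of 1, 1, 2, 3 meters land on stones at 1, 2, 4 and 7, so this prints true
    System.out.println(new FrogCrossing().canCross(stones));
}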

[Design] Database Indexing

ref 1 ref 2

Why Indexing?

On database disks (we’re talking about disk-based storage devices), data is stored in blocks, and these blocks are accessed as whole units. The blocks are organized like a linked list, with pointers from one block to the next.

Because of this, searching an un-indexed field in a database takes linear time. That’s why we need indexing.

Indexing is a way of sorting a number of records on one or more fields. Creating an index on a field in a table creates another data structure which holds the field value and a pointer to the record it relates to. This index structure is then sorted, allowing binary searches to be performed on it.

In other words, indexing speeds up searches.

Downside

The first disadvantage is the additional space usage. For example, in the MyISAM engine, indexes are stored together with the data, so the index files can quickly reach the size limits if many fields of the same table are indexed.

The second disadvantage is that using too many indexes can actually slow your database down: each time a row is inserted, updated, or removed, every index on the table also has to be updated.

So indexes speed up finding data, but slow down inserting, updating, and deleting data.

Primary keys

Some fields are automatically indexed.

A primary key, or a field marked as ‘unique’ – for example an email address, a user id, or a social security number – is automatically indexed so the database can quickly check that you’re not going to introduce bad (duplicate) data.

When to use

Since indexes only speed up searching, it is not wise to index a field that is used only for output.

The general rule is: index anything that is used to limit the number of results you’re trying to find. For more details, read ref 2.

How to create an index

The following is the SQL-92 syntax, which is supported by all major RDBMSs:

CREATE INDEX [index name] ON [table name] ( [column name] )
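
As a hypothetical example of issuing such a statement from Java via JDBC (the connection URL, credentials, and the users/email table and column names are made up):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateIndexExample {
    public static void main(String[] args) throws Exception {
        // hypothetical connection details
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/mydb", "user", "password");
             Statement stmt = conn.createStatement()) {
            // speeds up lookups that filter on email, at the cost of extra
            // storage and slower INSERT/UPDATE/DELETE on the users table
            stmt.execute("CREATE INDEX idx_users_email ON users (email)");
        }
    }
}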

[Java OOP] Java Runtime Exception

Exceptions in Java

In Java, there are 3 categories of exceptions:

  1. checked exceptions

    1. Typically a user error
    2. e.g. a file cannot be found
    3. These exceptions cannot simply be ignored at compile time.
  2. runtime exceptions (also called unchecked exceptions)

    1. Exceptions that could have been avoided by the programmer
    2. e.g. NullPointerException
    3. Ignored at compile time
  3. errors

    1. These are not exceptions at all
    2. e.g. a stack overflow occurs
    3. Also ignored at compile time

1. checked exception

A checked exception must be handled explicitly by the code (by either putting a try/catch block around the code, or adding a “throws” clause to the method).

The class Exception and any subclasses that are not also subclasses of RuntimeException are checked exceptions. Example:

  1. FileNotFoundException
  2. HttpRetryException
  3. SocketException
  4. IOException

Note: java.lang.RuntimeException is a subclass of java.lang.Exception.

2. unchecked exceptions (RuntimeException)

An unchecked exception does not need to be explicitly handled.

RuntimeException and its subclasses are unchecked exceptions.

Generally, RuntimeExceptions can be prevented programmatically, e.g. NullPointerException and ArrayIndexOutOfBoundsException. If you check for null before calling a method on a reference, a NullPointerException will never occur; similarly, an ArrayIndexOutOfBoundsException will never occur if you check the index first. RuntimeExceptions are not checked by the compiler, which keeps the calling code clean.

The runtime exception classes are exempted from compile-time checking, since the compiler cannot establish that run-time exceptions cannot occur.
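
A small sketch of the difference in Java (the file name is arbitrary):

import java.io.FileInputStream;
import java.io.FileNotFoundException;

public class ExceptionDemo {
    public static void main(String[] args) {
        // checked: the compiler forces us to catch it or declare "throws"
        try {
            new FileInputStream("no-such-file.txt");
        } catch (FileNotFoundException e) {
            System.out.println("checked: " + e.getMessage());
        }

        // unchecked: compiles without any try/catch, but the potential
        // NullPointerException can be prevented by checking for null first
        String s = null;
        if (s != null) {
            System.out.println(s.length());
        }
    }
}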

[Ruby] Endless Error With Gem Dependencies

The problem

In setting up the octopress environment, we need to install dependencies like this:

gem install bundler
bundle install

The first line simply installs bundler (a Ruby dependency manager). The second line is where you might face problems:

bundle install

Install the dependencies specified in your Gemfile

Look at the Gemfile in your octopress folder; it looks like this:

source "https://rubygems.org"

group :development do
  gem 'rake', '~> 10.0'
  gem 'jekyll', '~> 2.0'
  gem 'octopress-hooks', '~> 2.2'
  gem 'octopress-date-format', '~> 2.0'
  gem 'jekyll-sitemap'
  gem 'rdiscount', '~> 2.0'
  gem 'RedCloth', '~> 4.2.9'
  gem 'haml', '~> 4.0'
  gem 'compass', '~> 1.0.1'
  gem 'sass-globbing', '~> 1.0.0'
  gem 'rubypants', '~> 0.2.0'
  gem 'rb-fsevent', '~> 0.9'
  gem 'stringex', '~> 1.4.0'
end

gem 'sinatra', '~> 1.4.2'

When I do ‘bundle install’:

$ bundle install
Fetching gem metadata from https://rubygems.org/........
Resolving dependencies...

Gem::RemoteFetcher::FetchError: SSL_connect returned=1 errno=0 state=SSLv3 read
server certificate B: certificate verify failed (https://rubygems.org/gems/rake-
10.4.2.gem)
An error occurred while installing rake (10.4.2), and Bundler cannot continue.
Make sure that `gem install rake -v '10.4.2'` succeeds before bundling.

It shows that the gem ‘rake’ is missing, so I did as I was instructed: I typed gem install rake -v ‘10.4.2’ into the command line and executed it. Again, I was told another gem was missing. I kept doing this until I realized it was an endless procedure:

$ bundle install
Fetching gem metadata from https://rubygems.org/.......
Using rake 10.3.2
Using RedCloth 4.2.9
Using blankslate 2.1.2.4

Gem::RemoteFetcher::FetchError: SSL_connect returned=1 errno=0 state=SSLv3 read
server certificate B: certificate verify failed (https://rubygems.org/gems/timer
s-1.1.0.gem)
An error occurred while installing timers (1.1.0), and Bundler cannot continue.
Make sure that `gem install timers -v '1.1.0'` succeeds before bundling.

Something is very wrong with the execution of ‘bundle install’.

The solution

Until I found this post, I was hopelessly searching around and reading up on Ruby gems and related topics.

In fact, I just deleted one letter, changing source “https://rubygems.org” to source “http://rubygems.org” on line 1 of the Gemfile in the octopress folder.

Then, everything is fine:

$ bundle install
Fetching gem metadata from http://rubygems.org/........
Resolving dependencies....
Installing rake 10.4.2
Using RedCloth 4.2.9
Using blankslate 2.1.2.4
Installing hitimes 1.2.2
Installing timers 4.0.1
Installing celluloid 0.16.0
Installing chunky_png 1.3.3
Installing fast-stemmer 1.0.2
Installing classifier-reborn 2.0.2
Installing coffee-script-source 1.8.0
Installing execjs 2.2.2
Installing coffee-script 2.3.0
Installing colorator 0.1
Installing multi_json 1.10.1
Installing sass 3.4.9
Installing compass-core 1.0.1
Installing compass-import-once 1.0.5
Installing rb-fsevent 0.9.4
Installing ffi 1.9.6
Installing rb-inotify 0.9.5
Installing compass 1.0.1
Installing tilt 1.4.1
Installing haml 4.0.6
Installing jekyll-coffeescript 1.0.1
Installing jekyll-gist 1.1.0
Installing jekyll-paginate 1.1.0
Installing jekyll-sass-converter 1.3.0
Installing listen 2.8.4
Installing jekyll-watch 1.2.0
Installing kramdown 1.5.0
Installing liquid 2.6.1
Installing mercenary 0.3.5
Installing posix-spawn 0.3.9
Installing yajl-ruby 1.1.0
Installing pygments.rb 0.6.0
Installing redcarpet 3.2.2
Installing safe_yaml 1.0.4
Installing parslet 1.5.0
Installing toml 0.1.2
Installing jekyll 2.5.3
Installing jekyll-sitemap 0.7.0
Installing octopress-hooks 2.2.3
Installing octopress-date-format 2.0.2
Installing rack 1.6.0
Installing rack-protection 1.5.3
Installing rdiscount 2.1.7.1
Installing rubypants 0.2.0
Installing sass-globbing 1.0.0
Installing sinatra 1.4.5
Installing stringex 1.4.0
Using bundler 1.7.9
Your bundle is complete!
Use `bundle show [gemname]` to see where a bundled gem is installed.
Post-install message from compass:
    Compass is charityware. If you love it, please donate on our behalf at http:
//umdf.org/compass Thanks!
Post-install message from haml:

HEADS UP! Haml 4.0 has many improvements, but also has changes that may break
your application:

* Support for Ruby 1.8.6 dropped
* Support for Rails 2 dropped
* Sass filter now always outputs <style> tags
* Data attributes are now hyphenated, not underscored
* html2haml utility moved to the html2haml gem
* Textile and Maruku filters moved to the haml-contrib gem

For more info see:

http://rubydoc.info/github/haml/haml/file/CHANGELOG.md

I guess this is because of my network security settings, but I do not know for sure. If you are facing the same issue, I hope my solution helps. Or, if you know the reason, please tell me by leaving a comment below. Thanks!

[Java OOP] Override/overload Java Main Method

Can we overload main method in Java?

Yes, we can. But only public static void main(String[] args) will be used as the entry point when the class is launched by the JVM.

You can call the other main() overloads yourself from code.

For example:

class Simple {
    public static void main(int a) {
        System.out.println(a);
    }

    public static void main(String args[]) {
        System.out.println("main() method invoked");
        main(10);
    }
}

output:

main() method invoked
10

Can we override main method in Java?

No.

main is a static (class) method, so it does not make sense to “override” it (or any other static method). The concept of overriding applies only to instance methods.
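
To illustrate: if a subclass declares its own static main, it hides the parent’s main rather than overriding it, and which one runs is decided by the class you launch, not by dynamic dispatch (the class names below are just for illustration):

class Parent {
    public static void main(String[] args) {
        System.out.println("Parent.main");
    }
}

class Child extends Parent {
    // this HIDES Parent.main; it does not override it
    public static void main(String[] args) {
        System.out.println("Child.main");
    }
}

Running java Parent prints “Parent.main”, while running java Child prints “Child.main”.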

[Java OOP] Common Root of Java Classes

Common Root Class

java.lang.Object, link

All Java classes are derived from this common root class, which defines common behaviors.

These common behaviors include hooks for thread coordination (wait/notify), interaction with the garbage collector (finalize), equality and hashing, and so on.

Some methods

ref

protected Object clone() throws CloneNotSupportedException

Creates and returns a copy of this object.

public boolean equals(Object obj)

Indicates whether some other object is “equal to” this one.

protected void finalize() throws Throwable

Called by the garbage collector on an object when garbage collection determines that there are no more references to the object.

public final Class getClass()

Returns the runtime class of an object.

public int hashCode()

Returns a hash code value for the object.

public String toString()

Returns a string representation of the object.
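
For example, a class typically overrides equals, hashCode, and toString together (a minimal sketch; the Point class is made up):

import java.util.Objects;

class Point {
    private final int x;
    private final int y;

    Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object obj) {
        if (this == obj) return true;
        if (!(obj instanceof Point)) return false;
        Point other = (Point) obj;
        return x == other.x && y == other.y;
    }

    @Override
    public int hashCode() {
        return Objects.hash(x, y); // equal objects must produce equal hash codes
    }

    @Override
    public String toString() {
        return "Point(" + x + ", " + y + ")";
    }
}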

Sync-related methods

The notify, notifyAll, and wait methods of Object all play a part in synchronizing the activities of independently running threads in a program.

public final void notify()
public final void notifyAll()
public final void wait()
public final void wait(long timeout)
public final void wait(long timeout, int nanos)
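
A minimal sketch of how wait and notify coordinate two threads; both calls must be made while holding the same monitor (the lock object), otherwise an IllegalMonitorStateException is thrown:

public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                while (!ready) {             // guard against spurious wakeups
                    try {
                        lock.wait();         // releases the lock while waiting
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                System.out.println("got the signal");
            }
        });
        waiter.start();

        synchronized (lock) {
            ready = true;
            lock.notify();                   // wakes one thread waiting on this lock
        }
        waiter.join();
    }
}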