ConcurrentHashMap Analysis


As the name suggests, this is a concurrent HashMap. Let's first look at ConcurrentHashMap's data structure:

A ConcurrentHashMap is made up of multiple Segments, and each Segment internally holds an array of HashEntry objects (which store the key-value pairs), much like HashMap's Entry array.
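In outline, the structure looks like this (field names as in the JDK 7 source):

ConcurrentHashMap
    Segment<K,V>[] segments          (each Segment extends ReentrantLock)
        HashEntry<K,V>[] table       (a small hash table inside each segment)
            HashEntry -> HashEntry -> ...   (per-bucket linked list holding the key-value pairs)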

Looking at the code, ConcurrentHashMap's basic fields are:

//segment mask: used to compute the index of a key's segment within segments
final int segmentMask;
//segment shift: used together with the mask to compute the index of a key's segment
final int segmentShift;
//the segments array; each segment holds its own HashEntry array, and it is this split into multiple segments that provides the concurrency
final Segment<K,V>[] segments;
Next, the important Segment data structure:
/**
 * Extends ReentrantLock, so a segment can lock itself for thread safety.
 * A Segment is essentially a small HashMap of its own.
 */
static final class Segment<K,V> extends ReentrantLock implements Serializable {
    //the table that stores the entries
    transient volatile HashEntry<K,V>[] table;
    //number of elements in this segment
    transient int count;
    //resize threshold for the table
    transient int threshold;
    //load factor, 0.75 by default
    final float loadFactor;
    ...
}
Let's start with how a ConcurrentHashMap is initialized:
public ConcurrentHashMap(int initialCapacity,
                             float loadFactor, int concurrencyLevel) {
    if (!(loadFactor > 0) || initialCapacity < 0 || concurrencyLevel <= 0)
            throw new IllegalArgumentException();
    if (concurrencyLevel > MAX_SEGMENTS) //concurrency level, 16 by default, capped at MAX_SEGMENTS (65536)
            concurrencyLevel = MAX_SEGMENTS;
    // Find power-of-two sizes best matching arguments
    int sshift = 0;
    int ssize = 1; //size of the segments array: the smallest power of two that is >= concurrencyLevel
    while (ssize < concurrencyLevel) { //find the smallest power-of-two ssize >= concurrencyLevel
         ++sshift;
         ssize <<= 1;
    }
    this.segmentShift = 32 - sshift; //segment shift; 32 because a hash code is a 32-bit (4-byte) int; used to compute a key's segment index
    this.segmentMask = ssize - 1; //segment mask: ssize - 1, like a subnet mask; with the default ssize of 16 the mask is binary 1111; used to compute a key's segment index
    if (initialCapacity > MAXIMUM_CAPACITY) //initial capacity of the whole map, 16 by default
            initialCapacity = MAXIMUM_CAPACITY;
    int c = initialCapacity / ssize;
    if (c * ssize < initialCapacity)
        ++c;
    int cap = MIN_SEGMENT_TABLE_CAPACITY; //size of each segment's table, at least 2, and always a power of two
    while (cap < c)
        cap <<= 1;
    // create segments and segments[0]
    Segment<K,V> s0 =
        new Segment<K,V>(loadFactor, (int)(cap * loadFactor),
                         (HashEntry<K,V>[])new HashEntry[cap]); //create segments[0], used later as the prototype for the other segments
    Segment<K,V>[] ss = (Segment<K,V>[])new Segment[ssize]; //create the segments array
    UNSAFE.putOrderedObject(ss, SBASE, s0); // ordered write of segments[0]
    this.segments = ss;
}
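To make the sizing math concrete, here is a small standalone sketch (my own illustration, not JDK code) that reproduces the computation above for the default arguments, i.e. new ConcurrentHashMap<K,V>(16, 0.75f, 16):

public class SegmentSizingDemo {
    public static void main(String[] args) {
        int initialCapacity = 16, concurrencyLevel = 16;

        // smallest power of two >= concurrencyLevel
        int sshift = 0, ssize = 1;
        while (ssize < concurrencyLevel) { ++sshift; ssize <<= 1; }
        int segmentShift = 32 - sshift;   // 28
        int segmentMask = ssize - 1;      // 15 (binary 1111)

        // per-segment table size: smallest power of two >= initialCapacity / ssize, at least 2
        int c = initialCapacity / ssize;
        if (c * ssize < initialCapacity) ++c;
        int cap = 2;                      // MIN_SEGMENT_TABLE_CAPACITY
        while (cap < c) cap <<= 1;

        System.out.printf("segments=%d, segmentShift=%d, segmentMask=%d, per-segment table=%d%n",
                ssize, segmentShift, segmentMask, cap);
        // prints: segments=16, segmentShift=28, segmentMask=15, per-segment table=2
    }
}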

Now the basic operations: put(), get(), remove() and size().

  • The put operation:
public V put(K key, V value) {
    Segment<K,V> s;
    if (value == null)
        throw new NullPointerException(); //neither key nor value may be null
    int hash = hash(key); //compute the key's hash
    int j = (hash >>> segmentShift) & segmentMask; //compute the key's segment index; the mask keeps j within the bounds of segments
    if ((s = (Segment<K,V>)UNSAFE.getObject(segments, (j << SSHIFT) + SBASE)) == null)//the segment does not exist yet
        s = ensureSegment(j); //create it
    return s.put(key, hash, value, false);
}
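A quick usage note (my own example, not from the original post): put() always overwrites an existing mapping, while putIfAbsent() goes through the same Segment.put() with onlyIfAbsent = true:

import java.util.concurrent.ConcurrentHashMap;

public class PutDemo {
    public static void main(String[] args) {
        ConcurrentHashMap<String, Integer> map = new ConcurrentHashMap<String, Integer>();
        System.out.println(map.put("a", 1));         // null  (new mapping)
        System.out.println(map.put("a", 2));         // 1     (overwritten, old value returned)
        System.out.println(map.putIfAbsent("a", 3)); // 2     (onlyIfAbsent = true, value kept)
        // map.put("b", null);  // would throw NullPointerException: neither key nor value may be null
    }
}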
The hash computation differs from HashMap's; it spreads the hash bits (a variant of the single-word Wang/Jenkins hash) so that both the segment index and the table index are well distributed:
private int hash(Object k) {
     int h = hashSeed;

     if ((0 != h) && (k instanceof String)) {
         return sun.misc.Hashing.stringHash32((String) k);
     }

     h ^= k.hashCode();
     h += (h <<  15) ^ 0xffffcd7d;
     h ^= (h >>> 10);
     h += (h <<   3);
     h ^= (h >>>  6);
     h += (h <<   2) + (h << 14);
     return h ^ (h >>> 16);
}
The ensureSegment method:
private Segment<K,V> ensureSegment(int k) {
    final Segment<K,V>[] ss = this.segments;
    long u = (k << SSHIFT) + SBASE; // raw offset
    Segment<K,V> seg;
    if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u)) == null) {
         Segment<K,V> proto = ss[0]; // use segment 0 as prototype
         int cap = proto.table.length;
         float lf = proto.loadFactor;
         int threshold = (int)(cap * lf);
         HashEntry<K,V>[] tab = (HashEntry<K,V>[])new HashEntry[cap];
         if ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
               == null) { // recheck
                Segment<K,V> s = new Segment<K,V>(lf, threshold, tab); //create the new segment
                while ((seg = (Segment<K,V>)UNSAFE.getObjectVolatile(ss, u))
                       == null) {
                    if (UNSAFE.compareAndSwapObject(ss, u, null, seg = s))
                        break;
                }
          }
    }
    return seg;
}
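The double-check plus CAS in ensureSegment() is a lazy-initialization pattern: read the slot, build a candidate only if the slot is still empty, and publish it with a single compareAndSwap so that concurrent callers all end up with the same Segment. The same idea can be sketched without UNSAFE using AtomicReferenceArray (a simplified illustration of the pattern, not the JDK implementation):

import java.util.concurrent.atomic.AtomicReferenceArray;

public class LazySlotDemo {
    private final AtomicReferenceArray<Object> slots = new AtomicReferenceArray<Object>(16);

    // Returns the object at index k, creating and publishing it at most once.
    Object ensureSlot(int k) {
        Object seg = slots.get(k);              // volatile read, like getObjectVolatile
        if (seg == null) {
            Object candidate = new Object();    // built outside the CAS, like new Segment(...)
            // keep retrying until the slot is non-null; only one CAS can win
            while ((seg = slots.get(k)) == null) {
                if (slots.compareAndSet(k, null, candidate)) {
                    seg = candidate;
                    break;
                }
            }
        }
        return seg;
    }
}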
Now the Segment.put method itself:
final V put(K key, int hash, V value, boolean onlyIfAbsent) {
    HashEntry<K,V> node = tryLock() ? null : //lock acquired immediately, node stays null
          scanAndLockForPut(key, hash, value); //lock not acquired: locate the bucket and pre-build the new node while waiting for the lock
    V oldValue;
    try {
        HashEntry<K,V>[] tab = table;
        int index = (tab.length - 1) & hash; //compute the key's index within the table from its hash
        HashEntry<K,V> first = entryAt(tab, index); //first HashEntry of the corresponding bucket
        for (HashEntry<K,V> e = first;;) {
             if (e != null) { //the bucket already has entries
                  K k;
                  if ((k = e.key) == key ||
                        (e.hash == hash && key.equals(k))) { //keys are equal
                       oldValue = e.value;
                       if (!onlyIfAbsent) { //overwrite the old value
                            e.value = value;
                            ++modCount;
                       }
                       break;
                  }
                  e = e.next;
              } else { //reached the end of the bucket's list without finding an equal key, so insert
                 if (node != null) //the node was already built while waiting for the lock
                      node.setNext(first); //link it in at the head of the bucket's list
                 else //otherwise build it now
                      node = new HashEntry<K,V>(hash, key, value, first);
                 int c = count + 1;
                 if (c > threshold && tab.length < MAXIMUM_CAPACITY)
                      rehash(node); //resize the table
                 else
                      setEntryAt(tab, index, node); //publish the new HashEntry at index in the table
                 ++modCount;
                 count = c;
                 oldValue = null;
                 break;
              }
      }
    } finally {
      unlock(); //release the segment's lock
    }
    return oldValue;
}
It is also worth looking at the lock-waiting path, scanAndLockForPut():
private HashEntry<K,V> scanAndLockForPut(K key, int hash, V value) {
      HashEntry<K,V> first = entryForHash(this, hash); //first node of the bucket this hash maps to
      HashEntry<K,V> e = first; 
      HashEntry<K,V> node = null;
      int retries = -1; // negative while locating node
      while (!tryLock()) { //while the lock is unavailable, keep trying and locate or pre-build the new node
           HashEntry<K,V> f; // to recheck first below
           if (retries < 0) { 
                    if (e == null) { //the first node is null: this bucket index is still empty
                        if (node == null) // create the new node
                            node = new HashEntry<K,V>(hash, key, value, null);
                        retries = 0;
                    } else if (key.equals(e.key))//an equal key already exists, no need to keep scanning
                        retries = 0;
                     else //move on to the next node
                        e = e.next;
            } else if (++retries > MAX_SCAN_RETRIES) { //too many attempts: block on the lock; MAX_SCAN_RETRIES is 64 on multi-core machines and 1 otherwise
                    lock();
                    break;
            } else if ((retries & 1) == 0 && 
                         (f = entryForHash(this, hash)) != first) { //while waiting, another thread may have replaced or removed the bucket's head node, so re-check it
                    e = first = f; // re-traverse if entry changed
                    retries = -1;
            }
     }
     return node;
}

The scan above mirrors the lookup inside put(); the point is simply to let a thread that is blocked on the lock do some useful work in advance (locate the bucket and pre-build the new node).

Finally, let's look at the resize, rehash():

private void rehash(HashEntry<K,V> node) {
      HashEntry<K,V>[] oldTable = table;
      int oldCapacity = oldTable.length;
      int newCapacity = oldCapacity << 1; //double the capacity
      threshold = (int)(newCapacity * loadFactor); //new resize threshold
      HashEntry<K,V>[] newTable =
            (HashEntry<K,V>[]) new HashEntry[newCapacity];
      int sizeMask = newCapacity - 1; //mask for the new table
      for (int i = 0; i < oldCapacity ; i++) {
          HashEntry<K,V> e = oldTable[i];
          if (e != null) {
                HashEntry<K,V> next = e.next;
                int idx = e.hash & sizeMask;
                if (next == null)   //only one node in this bucket: move it to the new table directly
                    newTable[idx] = e;
                else { //more than one node: reuse as much of the existing list as possible
                    HashEntry<K,V> lastRun = e;
                    int lastIdx = idx;
                    for (HashEntry<K,V> last = next; //find lastRun, the start of the trailing run of nodes that all map to the same new index
                         last != null;
                         last = last.next) {
                         int k = last.hash & sizeMask;
                         if (k != lastIdx) {
                             lastIdx = k;
                             lastRun = last;
                         }
                    }
                    newTable[lastIdx] = lastRun; //reuse that whole trailing run in place
                    //clone the remaining nodes (those before lastRun) one by one
                    for (HashEntry<K,V> p = e; p != lastRun; p = p.next) {
                         V v = p.value;
                         int h = p.hash;
                         int k = h & sizeMask;
                         HashEntry<K,V> n = newTable[k];
                         newTable[k] = new HashEntry<K,V>(h, p.key, v, n);
                    }
                }
            }
        }
        int nodeIndex = node.hash & sizeMask; //finally, add the new node
        node.setNext(newTable[nodeIndex]);
        newTable[nodeIndex] = node;
        table = newTable;
  }
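The lastRun trick deserves a closer look. In the old table every node of a bucket satisfies hash & (oldCapacity - 1) == i, so in the doubled table each node moves to either index i or i + oldCapacity. The inner loop finds the start of the longest trailing run of nodes that all land on the same new index; that run is reused as-is, and only the nodes before it are cloned. A small standalone simulation of just the index calculation (my own illustration, the hash values are made up):

public class LastRunDemo {
    public static void main(String[] args) {
        int oldCapacity = 4, newCapacity = oldCapacity << 1, sizeMask = newCapacity - 1;
        // hashes of nodes that all share one bucket of the old table (hash & (oldCapacity - 1) == 1)
        int[] hashes = {5, 1, 9, 17, 25};

        int lastRun = 0, lastIdx = hashes[0] & sizeMask;
        for (int i = 1; i < hashes.length; i++) {
            int k = hashes[i] & sizeMask;
            if (k != lastIdx) { lastIdx = k; lastRun = i; }
        }
        System.out.println("new index of each node:");
        for (int h : hashes) System.out.print((h & sizeMask) + " "); // 5 1 1 1 1
        System.out.println();
        System.out.println("lastRun starts at position " + lastRun
                + ": " + (hashes.length - lastRun) + " trailing node(s) are reused, "
                + lastRun + " node(s) are cloned");
    }
}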

That covers put in brief; on to the get method.

  • The get operation is fairly straightforward; note that it takes no lock at all, relying instead on volatile reads:
public V get(Object key) {
    Segment<K,V> s; // manually integrate access methods to reduce overhead
    HashEntry<K,V>[] tab;
    int h = hash(key); //compute the hash from the key
    long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE; //offset of the key's segment within segments
    if ((s = (Segment<K,V>)UNSAFE.getObjectVolatile(segments, u)) != null &&
         (tab = s.table) != null) {
         for (HashEntry<K,V> e = (HashEntry<K,V>) UNSAFE.getObjectVolatile //volatile read of the first node of the key's bucket in segment.table
                     (tab, ((long)(((tab.length - 1) & h)) << TSHIFT) + TBASE);
             e != null; e = e.next) { //walk the bucket's list, compare keys and return the value
             K k;
             if ((k = e.key) == key || (e.hash == h && key.equals(k)))
                  return e.value;
         }
    }
    return null;
}
  • The remove operation:
public V remove(Object key) {
     int hash = hash(key); //compute the hash
     Segment<K,V> s = segmentForHash(hash); //locate the segment
     return s == null ? null : s.remove(key, hash, null);
}

private Segment<K,V> segmentForHash(int h) {
     long u = (((h >>> segmentShift) & segmentMask) << SSHIFT) + SBASE;
     return (Segment<K,V>) UNSAFE.getObjectVolatile(segments, u);
}

Segment's remove method is basically a linked-list deletion:

final V remove(Object key, int hash, Object value) {
    if (!tryLock()) //try to take the lock without blocking
         scanAndLock(key, hash); //otherwise scan the bucket while waiting for the lock
     V oldValue = null;
     try {
         HashEntry<K,V>[] tab = table;
         int index = (tab.length - 1) & hash; //index of the key's bucket in the table
         HashEntry<K,V> e = entryAt(tab, index); //first node of that bucket
         HashEntry<K,V> pred = null;
         while (e != null) {
              K k;
              HashEntry<K,V> next = e.next;
              if ((k = e.key) == key ||
                     (e.hash == hash && key.equals(k))) {
                 V v = e.value;
                 if (value == null || value == v || value.equals(v)) {
                       if (pred == null) //removing the head node of the bucket
                            setEntryAt(tab, index, next);
                       else  //link the removed node's predecessor directly to its successor
                            pred.setNext(next);
                       ++modCount;
                       --count;
                       oldValue = v;
                  }
                  break;
              }
              pred = e;
              e = next;
           }
     } finally {
         unlock();//release the lock
     }
     return oldValue;
}
  • Finally, the size operation. This one is a little tricky, because many threads may be adding to or removing from the segments while we count. Here is how size is computed:
public int size() {
        // Try a few times to get accurate count. On failure due to
        // continuous async changes in table, resort to locking.
        final Segment<K,V>[] segments = this.segments;
        int size;
        boolean overflow; // true if size overflows 32 bits
        long sum;         // sum of modCounts
        long last = 0L;   // previous sum
        int retries = -1; // first iteration isn't retry
        try {
            for (;;) {
                if (retries++ == RETRIES_BEFORE_LOCK) { //first try counting without any locks, hoping no writes are in flight; after RETRIES_BEFORE_LOCK (2) failed attempts, fall back to locking
                    for (int j = 0; j < segments.length; ++j)
                        ensureSegment(j).lock(); //lock every segment
                }
                sum = 0L;
                size = 0;
                overflow = false;
                for (int j = 0; j < segments.length; ++j) {
                    Segment<K,V> seg = segmentAt(segments, j);
                    if (seg != null) {
                        sum += seg.modCount;
                        int c = seg.count;
                        if (c < 0 || (size += c) < 0)
                            overflow = true;
                    }
                }
                if (sum == last) //two consecutive passes saw the same modCount sum, so nothing changed in between
                    break;
                last = sum;
            }
        } finally {
            if (retries > RETRIES_BEFORE_LOCK) { //the optimistic attempts failed and the segments were locked, so unlock them
                for (int j = 0; j < segments.length; ++j)
                    segmentAt(segments, j).unlock();
            }
        }
        return overflow ? Integer.MAX_VALUE : size;
}
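Stripped of the UNSAFE and Segment machinery, this is an optimistic-then-lock pattern: sum count and modCount over all segments without locking, trust the result only if two consecutive passes see the same modCount sum, and after RETRIES_BEFORE_LOCK (2) failed attempts lock every segment and count under the locks. A compilable sketch of just that pattern, using plain counter arrays as stand-ins for the segments (my own illustration, not the JDK code):

import java.util.concurrent.atomic.AtomicLongArray;
import java.util.concurrent.locks.ReentrantLock;

public class OptimisticSizeDemo {
    static final int RETRIES_BEFORE_LOCK = 2;
    final int segments = 4;
    final AtomicLongArray counts = new AtomicLongArray(segments);    // stand-in for Segment.count
    final AtomicLongArray modCounts = new AtomicLongArray(segments); // stand-in for Segment.modCount
    final ReentrantLock[] locks = new ReentrantLock[segments];
    { for (int j = 0; j < segments; j++) locks[j] = new ReentrantLock(); }

    long size() {
        long size, sum, last = 0L;
        int retries = -1;
        try {
            for (;;) {
                if (retries++ == RETRIES_BEFORE_LOCK)        // optimistic passes exhausted:
                    for (ReentrantLock l : locks) l.lock();  // lock everything and count exactly
                sum = 0L;
                size = 0L;
                for (int j = 0; j < segments; j++) {
                    sum += modCounts.get(j);
                    size += counts.get(j);
                }
                if (sum == last)  // two consecutive passes saw the same modCount sum:
                    break;        // no writes slipped in between, so the total is consistent
                last = sum;
            }
        } finally {
            if (retries > RETRIES_BEFORE_LOCK)
                for (ReentrantLock l : locks) l.unlock();
        }
        return size;
    }

    public static void main(String[] args) {
        OptimisticSizeDemo d = new OptimisticSizeDemo();
        d.counts.set(0, 3); d.modCounts.set(0, 3);
        d.counts.set(2, 5); d.modCounts.set(2, 5);
        System.out.println(d.size()); // 8
    }
}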
That covers the basic operations of ConcurrentHashMap, and they are quite interesting. You may have noticed the many UNSAFE calls: sun.misc.Unsafe is not part of the public core JDK API and, as the name warns, is unsafe to use directly, but the JDK itself uses it heavily because its operations perform better than the ordinary alternatives; there are plenty of articles on it worth reading. So how does ConcurrentHashMap actually perform under concurrency? I ran a simple comparison of ConcurrentHashMap against Hashtable:

        With 5 threads inserting 1,000,000 objects, ConcurrentHashMap was faster than Hashtable, and the gap grows as the number of threads and the amount of data increase.
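For reference, a rough sketch of that kind of micro-benchmark (my own code; the class name, thread count and key ranges are illustrative, and absolute timings depend heavily on hardware and JVM):

import java.util.Hashtable;
import java.util.Map;
import java.util.concurrent.*;

public class MapInsertBenchmark {
    static long run(final Map<Integer, Integer> map, int threads, final int perThread) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        final CountDownLatch done = new CountDownLatch(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            final int base = t * perThread;
            pool.execute(new Runnable() {
                public void run() {
                    for (int i = 0; i < perThread; i++)
                        map.put(base + i, i);   // each thread inserts its own key range
                    done.countDown();
                }
            });
        }
        done.await();
        pool.shutdown();
        return (System.nanoTime() - start) / 1000000;  // elapsed millis
    }

    public static void main(String[] args) throws Exception {
        int threads = 5, perThread = 200000;   // 5 threads * 200k = 1,000,000 inserts
        System.out.println("ConcurrentHashMap: "
                + run(new ConcurrentHashMap<Integer, Integer>(), threads, perThread) + " ms");
        System.out.println("Hashtable:         "
                + run(new Hashtable<Integer, Integer>(), threads, perThread) + " ms");
    }
}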

Corrections and comments are welcome.
