
LevelDB's Compaction Scoring Logic

强子大叔的码田
Published on 2017/07/23 11:07

Each time a new SSTable file is produced, LevelDB first works out which level to place it on. The decision is driven by whether the file's key range overlaps the existing levels; the logic is here:

stop in org.iq80.leveldb.impl.Version.pickLevelForMemTableOutput

int pickLevelForMemTableOutput(Slice smallestUserKey, Slice largestUserKey) {
    // Debug output added for this walkthrough: dump the key range of the new file.
    System.out.println(new String(smallestUserKey.getBytes()));
    System.out.println(new String(largestUserKey.getBytes()));

    int level = 0;
    if (!overlapInLevel(0, smallestUserKey, largestUserKey)) {
        // Push to next level if there is no overlap in next level,
        // and the #bytes overlapping in the level after that are limited.
        InternalKey start = new InternalKey(smallestUserKey, MAX_SEQUENCE_NUMBER, ValueType.VALUE);
        InternalKey limit = new InternalKey(largestUserKey, 0, ValueType.VALUE);
        while (level < MAX_MEM_COMPACT_LEVEL) {
            if (overlapInLevel(level + 1, smallestUserKey, largestUserKey)) {
                break;
            }
            long sum = Compaction.totalFileSize(versionSet.getOverlappingInputs(level + 2, start, limit));
            if (sum > MAX_GRAND_PARENT_OVERLAP_BYTES) {
                break;
            }
            level++;
        }
    }
    return level;
}
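
To make the loop bounds concrete, here is a minimal sketch of the two constants that cap the push-down, assuming the classic LevelDB defaults (the names mirror the fields referenced above; the exact values in a particular build of org.iq80.leveldb may differ):

// A minimal sketch, assuming the classic LevelDB defaults; not copied from the library.
public class PickLevelBoundsSketch {
    // A fresh memtable flush is pushed down at most this far.
    static final int MAX_MEM_COMPACT_LEVEL = 2;
    // Target size of a single SSTable.
    static final int TARGET_FILE_SIZE = 2 * 1048576; // 2 MB
    // Stop pushing down once the overlap with level + 2 exceeds this bound.
    static final long MAX_GRAND_PARENT_OVERLAP_BYTES = 10L * TARGET_FILE_SIZE; // 20 MB

    public static void main(String[] args) {
        // With these bounds a new file can only land on level 0, 1 or 2:
        // level 0 if it overlaps level 0, otherwise the deepest level (up to 2) whose
        // next level it does not overlap and whose grandparent overlap stays under the cap.
        System.out.println("max level for a memtable output: " + MAX_MEM_COMPACT_LEVEL);
        System.out.println("grandparent overlap cap (bytes): " + MAX_GRAND_PARENT_OVERLAP_BYTES);
    }
}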

 

Once the level is chosen, the edit has to be applied; see:

stop in  org.iq80.leveldb.impl.VersionSet$Builder.apply --- with the level chosen, the file is placed on that level

/**
 * Apply the specified edit to the current state.
 */
public void apply(VersionEdit edit) {
    // Update compaction pointers
    for (Entry<Integer, InternalKey> entry : edit.getCompactPointers().entrySet()) {
        Integer level = entry.getKey();
        InternalKey internalKey = entry.getValue();
        versionSet.compactPointers.put(level, internalKey);
    }

    // Delete files
    for (Entry<Integer, Long> entry : edit.getDeletedFiles().entries()) {
        Integer level = entry.getKey();
        Long fileNumber = entry.getValue();
        levels.get(level).deletedFiles.add(fileNumber);
        // todo missing update to addedFiles?
    }

    // Add new files
    for (Entry<Integer, FileMetaData> entry : edit.getNewFiles().entries()) {
        Integer level = entry.getKey();
        FileMetaData fileMetaData = entry.getValue();

        // We arrange to automatically compact this file after
        // a certain number of seeks. Let's assume:
        //   (1) One seek costs 10ms
        //   (2) Writing or reading 1MB costs 10ms (100MB/s)
        //   (3) A compaction of 1MB does 25MB of IO:
        //         1MB read from this level
        //         10-12MB read from next level (boundaries may be misaligned)
        //         10-12MB written to next level
        // This implies that 25 seeks cost the same as the compaction
        // of 1MB of data. I.e., one seek costs approximately the
        // same as the compaction of 40KB of data. We are a little
        // conservative and allow approximately one seek for every 16KB
        // of data before triggering a compaction.
        int allowedSeeks = (int) (fileMetaData.getFileSize() / 16384);
        if (allowedSeeks < 100) {
            allowedSeeks = 100;
        }
        fileMetaData.setAllowedSeeks(allowedSeeks);

        levels.get(level).deletedFiles.remove(fileMetaData.getNumber());
        levels.get(level).addedFiles.add(fileMetaData);
    }
}
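
The arithmetic behind allowedSeeks is easy to check by hand. The tiny demo below (an illustrative sketch, not library code) reproduces the formula: one seek is charged per 16 KB of file size, so a 2 MB table gets 2 * 1048576 / 16384 = 128 allowed seeks, and anything smaller than about 1.6 MB is clamped to the floor of 100.

// A worked example of the allowed-seeks formula above; a sketch, not code from the library.
public class AllowedSeeksDemo {
    static int allowedSeeks(long fileSizeBytes) {
        int allowed = (int) (fileSizeBytes / 16384); // one seek per 16 KB of data
        return Math.max(allowed, 100);               // never fewer than 100
    }

    public static void main(String[] args) {
        System.out.println(allowedSeeks(2L * 1048576)); // 128 for a 2 MB file
        System.out.println(allowedSeeks(512L * 1024));  // 100: small files are clamped to the floor
    }
}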

 

 

Next comes the scoring itself:

stop in  org.iq80.leveldb.impl.VersionSet.finalizeVersion --- compute the compaction score

private void finalizeVersion(Version version) {
    // Debug helper added for this walkthrough: dump the files in each level.
    printFile(version);

    // Precomputed best level for next compaction
    int bestLevel = -1;
    double bestScore = -1;

    for (int level = 0; level < version.numberOfLevels() - 1; level++) {
        double score;
        if (level == 0) {
            // We treat level-0 specially by bounding the number of files
            // instead of number of bytes for two reasons:
            //
            // (1) With larger write-buffer sizes, it is nice not to do too
            // many level-0 compactions.
            //
            // (2) The files in level-0 are merged on every read and
            // therefore we wish to avoid too many files when the individual
            // file size is small (perhaps because of a small write-buffer
            // setting, or very high compression ratios, or lots of
            // overwrites/deletions).
            score = 1.0 * version.numberOfFilesInLevel(level) / L0_COMPACTION_TRIGGER;
        } else {
            // Compute the ratio of current size to size limit.
            long levelBytes = 0;
            for (FileMetaData fileMetaData : version.getFiles(level)) {
                levelBytes += fileMetaData.getFileSize();
            }
            score = 1.0 * levelBytes / maxBytesForLevel(level);
        }

        if (score > bestScore) {
            bestLevel = level;
            bestScore = score;
        }
    }

    version.setCompactionLevel(bestLevel);
    version.setCompactionScore(bestScore);
}
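
The score denominators come from two knobs. Below is a hedged sketch assuming the standard LevelDB defaults (an L0 compaction trigger of 4 files; level 1 capped at 10 MB, each deeper level 10x larger); the real helpers live in VersionSet and may be implemented slightly differently.

// A sketch of the inputs to the score computation above, assuming standard LevelDB sizing.
public class ScoreInputsSketch {
    static final int L0_COMPACTION_TRIGGER = 4; // level 0 is scored by file count

    static double maxBytesForLevel(int level) {
        double result = 10 * 1048576.0; // 10 MB for level 1
        while (level > 1) {
            result *= 10;               // each deeper level allows 10x more data
            level--;
        }
        return result;
    }

    public static void main(String[] args) {
        // Level 0 holding 4 files scores exactly 1.0 and becomes eligible for compaction.
        System.out.println(1.0 * 4 / L0_COMPACTION_TRIGGER);            // 1.0
        // A level 2 holding 150 MB scores 1.5 against its 100 MB limit.
        System.out.println(1.0 * 150 * 1048576 / maxBytesForLevel(2));  // 1.5
    }
}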

 

Once the score is computed, LevelDB decides whether a compaction is needed:

stop in  org.iq80.leveldb.impl.VersionSet.pickCompaction --- use the score to decide whether to compact

public Compaction pickCompaction() {
    // We prefer compactions triggered by too much data in a level over
    // the compactions triggered by seeks.
    boolean sizeCompaction = (current.getCompactionScore() >= 1);
    boolean seekCompaction = (current.getFileToCompact() != null);

    int level;
    List<FileMetaData> levelInputs;
    if (sizeCompaction) {
        level = current.getCompactionLevel();
        Preconditions.checkState(level >= 0);
        Preconditions.checkState(level + 1 < NUM_LEVELS);

        // Pick the first file that comes after compact_pointer_[level]
        levelInputs = newArrayList();
        for (FileMetaData fileMetaData : current.getFiles(level)) {
            if (!compactPointers.containsKey(level)
                    || internalKeyComparator.compare(fileMetaData.getLargest(), compactPointers.get(level)) > 0) {
                levelInputs.add(fileMetaData);
                break;
            }
        }
        if (levelInputs.isEmpty()) {
            // Wrap-around to the beginning of the key space
            levelInputs.add(current.getFiles(level).get(0));
        }
    }
    else if (seekCompaction) {
        level = current.getFileToCompactLevel();
        levelInputs = ImmutableList.of(current.getFileToCompact());
    }
    else {
        return null;
    }

    // Files in level 0 may overlap each other, so pick up all overlapping ones
    if (level == 0) {
        Entry<InternalKey, InternalKey> range = getRange(levelInputs);
        // Note that the next call will discard the file we placed in
        // c->inputs_[0] earlier and replace it with an overlapping set
        // which will include the picked file.
        levelInputs = getOverlappingInputs(0, range.getKey(), range.getValue());

        Preconditions.checkState(!levelInputs.isEmpty());
    }

    Compaction compaction = setupOtherInputs(level, levelInputs);
    return compaction;
}
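
The seek branch deserves a note. Below is a hedged sketch (not library code; the names are made up for illustration) of the bookkeeping that feeds getFileToCompact(): each read that probes a table without finding the key there spends one of that file's allowed seeks, and the first file to run out becomes the seek-compaction candidate that pickCompaction() falls back to when no level's size score reaches 1.

// Illustrative sketch of seek-triggered compaction bookkeeping; names are hypothetical.
public class SeekCompactionSketch {
    static class FileMeta {
        int allowedSeeks;
        FileMeta(int allowedSeeks) { this.allowedSeeks = allowedSeeks; }
    }

    static FileMeta fileToCompact;

    // Called after a lookup consulted 'file' but did not find the key in it.
    static void recordUnfruitfulSeek(FileMeta file) {
        if (--file.allowedSeeks <= 0 && fileToCompact == null) {
            fileToCompact = file; // this file now competes with size-triggered compactions
        }
    }

    public static void main(String[] args) {
        FileMeta hotFile = new FileMeta(2);
        recordUnfruitfulSeek(hotFile);
        recordUnfruitfulSeek(hotFile);
        System.out.println(fileToCompact == hotFile); // true: the file is now a compaction candidate
    }
}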

 
