You can manually run service_register.bat and service_start.bat under the <installdir>\bin directory. Check whether any errors occur when running those .bat files. Security software preinstalled on the machine may also be interfering.
@高磊
I have several servers on hand; on both 2008 and 2003, the 32-bit installs are basically unusable. As soon as I connect to 11211 and type stats, they all...
BigLazybone 2011/06/10 00:05 answered the question: Membase cluster problem
This is a bug from an earlier release. Version 1.7 put a great deal of work into cluster stability and preventing data loss, and should solve your problem.
@吴兴华
I have now installed Membase Server on two machines, using one as the master and adding the other to form a cluster. Adding it does work, but...
Membase + CouchOne = Membase speed + CouchDB indexing and MapReduce
@高磊
I recently saw that Membase is being integrated with CouchDB, but given MongoDB's convenience and storage throughput I plan to keep using it. Reads...

Did you install 1.6.5? Installation generally requires Administrator privileges. After installing, you can check in Services whether Membase Server is running normally. The Windows 32-bit build of 1.6.4 once had a problem, but it has since been updated.

As far as I know, more than half of Membase downloads are of the Windows version.

You can refer to http://forums.membase.org/thread/servererror-proxy-write-downstream

Also, you can try this command:

c:\program files\membase\server\bin\ep_engine\management\stats localhost:11210 all

Port 11210 connects directly to the Membase server; moxi is not needed.
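
If you want to reproduce that stats check without the bundled management script, here is a minimal Python sketch that speaks the plain memcached text protocol over a raw socket. The localhost:11210 address matches the command above; whether the node answers stats without authentication depends on your bucket setup, so treat this as an illustration rather than a supported tool.

# Minimal sketch: query Membase stats over the memcached text protocol,
# talking straight to port 11210 (no moxi). Host/port are assumptions for
# a default local install.
import socket

def memcached_stats(host="localhost", port=11210):
    """Send the plain-text 'stats' command and collect the reply lines."""
    stats = {}
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(b"stats\r\n")
        buf = b""
        # The reply is a series of "STAT <name> <value>" lines ending in "END".
        while not buf.endswith(b"END\r\n"):
            chunk = sock.recv(4096)
            if not chunk:
                break
            buf += chunk
    for line in buf.decode("ascii", errors="replace").splitlines():
        if line.startswith("STAT "):
            _, name, value = line.split(" ", 2)
            stats[name] = value
    return stats

if __name__ == "__main__":
    for name, value in sorted(memcached_stats().items()):
        print(name, value)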

@高磊
I have several servers on hand; on both 2008 and 2003, the 32-bit installs are basically unusable. As soon as I connect to 11211 and type stats, they all...

Which version? 1.6.5 should be fine. Also, check whether the memcached process is running.

@高磊
I have several servers on hand; on both 2008 and 2003, the 32-bit installs are basically unusable. As soon as I connect to 11211 and type stats, they all...

Jason Sirota is a Director of Application Architecture at The Knot, Inc.

Cache Architecture - Part I - Choosing a Provider for The Knot, Inc.

Introduction

The Knot Inc. is the premier media company devoted to weddings, pregnancy and everything in between, providing young women with the trusted information, products and advice they need to guide them through the most transformative events of their lives. Our family of premium brands began with the industry's #1 wedding brand, The Knot, and has grown to include WeddingChannel.com, The Nest and The Bump. With groundbreaking community platforms and incomparable content, our cache needs vary from product to product. We were in need of a centralized caching solution that could handle both the width and depth of our unique product offering.

With monthly pageviews in the hundreds of millions, the vast majority of them database-driven, caching is critical to both the performance and the scale of our applications.

Our Caching Legacy

We used various iterations of caching over our years of developing web applications; the process would generally go like this:


10. Build an application with no cache; let it hum along for a while
20. Realize the application was getting kind of slow
30. Build customized caching into the app; let it hum along for a while
40. Realize the application was getting kind of slow again
50. Rebuild the app on a newer technology stack
60. Go to 10


Sometime in the last 2-3 years we realized this was probably OK while we were small, but as we got bigger we needed a more formal cache solution: enter memcached.

The Naked Memcached Years

At first, we just started experimenting with naked memcached instances. Someone sat down with a systems engineer one day, created a CentOS instance with memcached and opened port 11211 on some machine.

We then plugged an app into the ip:port combo using the Enyim client, and we were running memcached. We were using the same instance in dev, QA, staging and production, and it had a whopping 1GB of space.

We figured out pretty quickly this wasn't going to suit us very well in the long term, so we provisioned a box and created multiple port-instances for a variety of our apps at 1GB apiece (one node for each app, 1GB per node).




This solution was OK for a while but we had a number of challenges.

· Problem 1. Naked memcached provides no out-of-the-box (OTB) instrumentation GUI, so we had to custom-develop monitoring and stats

· Problem 2. We were running on one node, so if that node went down our entire cache solution failed (a single point of failure) and our apps took a major performance hit

· Problem 3. The amount of space we had provisioned was woefully inadequate for our needs

Gear6

As a solution to these problems we implemented a proprietary caching solution from Gear6.

Gear6 had three notable improvements to naked memcached that made it attractive for us to implement:

· Solution to Problem 1. The OTB instrumentation, administration, monitoring and statistics were top-notch.

· Solution to Problem 2. Gear6 offered node replication to provide HA failover for cache instances

· Solution to Problem 3. The base-level Gear6 boxes combined RAM and SSD to provide 96GB of replicated space, six times what we were running with naked memcached.




We purchased the Gear6 appliance and got it set up and running. Then we spent a few months migrating our configs (see Part III).

Then we got the news: Gear6 filed for Chapter 11.

After a long saga of trying to reach the new owners, we finally had a call in mid-October where Violin broke the news that they would no longer be supporting Gear6 and we should (I am paraphrasing) get off of it ASAP.

That day, one of the SSD drives in the Gear6 hardware failed and we could not recover the second box in the Gear6 cluster.

At this point, our Gear6 boxes were a ticking time-bomb and it was time for a new solution.

Since we were changing anyway...

We wanted to make sure that we not only addressed the original problems we had with memcached, but also implemented a solution that solved some additional issues beyond the original three:

· Problem 4. The cache should be distributed over at least 5 physical nodes, so that if a node failed, only 20% (or less) of the cache would be affected

· Problem 5. The nodes should be organized so that development teams each have a bucket they can use as needed, without having to get more space "approved"

· Problem 6. New hardware can be added both vertically (by adding more RAM to the existing boxes) and horizontally (by adding more boxes)

· Problem 7. The company does not go out of business.

Talking with Zynga

I had seen a presentation about Facebook's use of memcached and was convinced that, in order both to insulate ourselves from proprietary software and to scale, we should go back to using naked memcached with open-source instrumentation. This would likely have included phpMemcacheAdmin for stats and Nagios for alerts and monitoring.

Even with a naked memcached solution, though, administration becomes somewhat of a manual process: different tools for different jobs rather than one centralized tool.

I spoke to Allan Leinwand at Zynga about our issues with Gear6 since he was familiar with Violin and he suggested that we speak to Membase. In talking to Membase and through our own research, we found that Membase solved all of our original problems, plus our new problems with Gear6.

1. Membase provides a rich set of both GUI and programmatic tools to manage and monitor the cache.

2. Membase not only runs on multiple physical nodes but also balances keys across those nodes using vBuckets (a minimal sketch of the idea follows this list)

3. Membase runs on Windows and can handle quite a bit more capacity (evidenced by Zynga) than we could possibly use.

4. Membase uses both HA replication and distributed nodes for different scenarios; in our case, it easily supports the five-node configuration

5. Membase provides Buckets that can be configured by Port to allow different teams to have a set amount of space

6. Hardware can be added both horizontally and vertically to a Membase cluster. However, one limitation is that all nodes have to run the same cache limit, so you do need to think carefully about your node size

7. No company is immune to going under but, in addition to their strong financial state, the risk for Membase is mitigated by two factors:

First, unlike Gear6, the Membase code is open-source so if they did go out of business it could still be maintained by the Community (or by ourselves if we wanted to learn Erlang).

Second, the Membase interface IS memcached so if we did need to switch back to naked memcached, it would simply be a matter of uninstalling Membase and installing Memcached, no configuration change needed.
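
To make point 2 above concrete, here is a rough Python sketch of the vBucket idea. The CRC32 hash, the vBucket count of 1,024, and the five-node server map are illustrative assumptions, not Membase's exact implementation; the real cluster manager also tracks replicas and updates the map during rebalances.

# Rough sketch of vBucket-style key distribution. The hash, the vBucket
# count and the vBucket->server map are illustrative assumptions, not
# Membase's exact algorithm.
import zlib

NUM_VBUCKETS = 1024

# In a real cluster this map comes from the cluster manager and changes on
# rebalance; here we simply spread vBuckets round-robin over five nodes.
SERVERS = ["cache%d.example.local:11210" % i for i in range(5)]
VBUCKET_MAP = {vb: SERVERS[vb % len(SERVERS)] for vb in range(NUM_VBUCKETS)}

def vbucket_for(key):
    """Hash a key to a vBucket id."""
    return zlib.crc32(key.encode("utf-8")) % NUM_VBUCKETS

def server_for(key):
    """Resolve the server that currently owns the key's vBucket."""
    return VBUCKET_MAP[vbucket_for(key)]

if __name__ == "__main__":
    for key in ("wedding:123", "nest:budget:42", "bump:article:7"):
        print(key, "-> vb", vbucket_for(key), "on", server_for(key))

Because keys only ever live in the vBuckets mapped to their node, losing one of the five nodes in this sketch touches roughly 20% of the keys, which is exactly the property described in Problem 4.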

Membase IS memcached

An underlying assumption to this entire process was that Membase runs just fine using standard Memcached clients. Considering that all of our apps are built to use standalone memcached clients, migrating to a solution that used a different client-architecture would have been prohibitive.

At the speed at which we were considering changing, we needed something that required almost zero client changes, which somewhat limited us to memcached or another after-market provider like Gear6.

Our choice, in the end, was between Membase and naked Memcached rather than other proprietary cache providers. We found that the low-cost of Membase and the benefits it provides outweighed any risks of going with a proprietary solution.
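
As an illustration of that compatibility, here is a hedged sketch using the pymemcache library for Python (our own apps use the Enyim client on .NET); the host name and keys are placeholders. The same code works unchanged whether the address points at a naked memcached instance or at a Membase node's moxi port.

# Hedged sketch: standard memcached client code that works unchanged against
# either a naked memcached instance or a Membase node fronted by moxi on
# port 11211. "cache.example.local" and the keys are placeholders.
from pymemcache.client.base import Client

def demo(host="cache.example.local", port=11211):
    client = Client((host, port))
    try:
        # Cache a rendered fragment for five minutes, then read it back.
        client.set("homepage:featured", b"<cached html fragment>", expire=300)
        value = client.get("homepage:featured")  # bytes, or None on a miss
        print("fetched:", value)
    finally:
        client.close()

if __name__ == "__main__":
    demo()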

In Part II, I'll discuss the implementation itself:

- Hardware configuration
- Windows vs. Linux
- Membase vs. Memcached Buckets
- Environments & Bucket Configuration
- Instrumentation

In Part III, our application stack and next steps:

- Application stack
- Memcached Clients
- Cache Migration Process
- Future Plans

@红薯
Membase is a new heavyweight member of the NoSQL family, supporting both Windows and Linux. Membase...

11211 is the traditional memcached port and requires the moxi process. 11210 is the default Membase port.

@红薯
Membase is a new heavyweight member of the NoSQL family, supporting both Windows and Linux. Membase...
BigLazybone 2011/01/07 09:04 answered the question: About Membase Bucket Type

Memcached is the old bottle, Membase is the new wine. The strength is not the same.

@小木桶
A quick question: when creating a Membase bucket, there are two Bucket Types, Memcached and Membase. They...

To reply #10: here is a blog describing their experience moving from memcached to Membase; it is quite interesting.

http://jasonsirota.com/

@红薯
Membase is a new heavyweight member of the NoSQL family, supporting both Windows and Linux. Membase...
BigLazybone 2011/01/07 08:52 answered the question: Urgent help with Membase!!!!!

Bump. I am using both of the tools the poster above mentioned...

@ajaxajaxandasp.net
Yesterday I downloaded and installed Membase, and also downloaded the client Membase.2.10 https://github.com...
BigLazybone 2011/01/07 08:33 answered the question: Please advise! A Membase usage problem

Go download the latest 1.6.4.1 release; that should fix the problem.

http://www.membase.org/downloads

@五月秋风
I have recently been learning how to use Membase. I have looked through the official site, but there is still a lot I don't understand. I ran into some problems using it on Ubuntu and would like to ask everyone...
