Is it 1.6.5 you installed? Installation generally requires Administrator privileges. After installing, you can check in Services whether Membase Server is running. The Windows 32-bit build of 1.6.4 did have a problem at one point, but it has since been fixed.
As far as I know, more than half of Membase downloads are of the Windows version.
c:\program files\membase\server\bin\ep_engine\management\stats localhost:11210 all
Port 11210 connects directly to the Membase server; no moxi is needed.
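If you would rather pull those stats from your own code, the same numbers are available over the plain memcached text protocol. A minimal C# sketch, assuming Membase is listening on localhost:11210 (the direct, non-moxi port mentioned above) and that the bucket accepts the ASCII protocol on that port:

using System;
using System.IO;
using System.Net.Sockets;

class MembaseStats
{
    static void Main()
    {
        // Connect straight to the Membase data port (11210), bypassing moxi.
        using (var client = new TcpClient("localhost", 11210))
        using (var stream = client.GetStream())
        using (var writer = new StreamWriter(stream) { AutoFlush = true })
        using (var reader = new StreamReader(stream))
        {
            // "stats" is a standard memcached text-protocol command.
            writer.Write("stats\r\n");

            // The server replies with "STAT <name> <value>" lines, then "END".
            string line;
            while ((line = reader.ReadLine()) != null && line != "END")
            {
                Console.WriteLine(line);
            }
        }
    }
}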
Jason Sirota is Director of Application Architecture at The Knot, Inc.
Cache Architecture - Part I - Choosing a Provider for The Knot, Inc.
The Knot Inc. is the premier media company devoted to weddings, pregnancy and everything in between, providing young women with the trusted information, products and advice they need to guide them through the most transformative events of their lives. Our family of premium brands began with the industry's #1 wedding brand, The Knot, and has grown to include WeddingChannel.com, The Nest and The Bump. With groundbreaking community platforms and incomparable content, our cache needs vary from product to product. We were in need of a centralized caching solution that could handle both the width and depth of our unique product offering.
With monthly pageviews in the hundreds of millions, the vast majority of them database-driven, caching is critical to both the performance and the scale of our applications.
Our Caching Legacy
We have used various iterations of caching over our years of developing web applications; the process would generally go like this:
10. Build an application with no cache; let it hum along for a while
20. Realize the application was getting kind of slow
30. Build customized caching into the app; let it hum along for a while
40. Realize the application was getting kind of slow again
50. Rebuild the app on a newer technology stack
60. Go to 10
Sometime in the last 2-3 years, we realized that while this was probably OK when we were small, as we got bigger we needed a more formal cache solution: enter memcached.
The Naked Memcached Years
At first, we just started experimenting with naked memcached instances. Someone sat down with a systems engineer one day, created a CentOS instance with memcached and opened port 11211 on some machine.
We then plugged an app into the ip:port combo using the Enyim client, and we were running memcached. We used the same instance in dev, QA, staging, and production, and it had a whopping 1 GB of space.
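For reference, wiring an app to that ip:port combo with Enyim looks roughly like this (a minimal sketch; the address and key names are placeholders, not our actual config):

using Enyim.Caching;
using Enyim.Caching.Configuration;
using Enyim.Caching.Memcached;

class CacheDemo
{
    static void Main()
    {
        // Point the client at the single naked memcached instance (placeholder ip:port).
        var config = new MemcachedClientConfiguration();
        config.AddServer("10.0.0.5:11211");

        using (var client = new MemcachedClient(config))
        {
            // Standard set/get against the shared 1 GB instance.
            client.Store(StoreMode.Set, "homepage:featured", "<cached html>");
            var value = client.Get<string>("homepage:featured");
        }
    }
}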
We figured out pretty quickly that this wasn't going to suit us very well in the long term, so we provisioned a box and created multiple port-instances for a variety of our apps at 1 GB apiece (one node for each app, 1 GB per node).
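On the memcached side, that layout is just one daemon per port, along these lines (ports, user, and app names illustrative):

memcached -d -m 1024 -p 11211 -u memcached   (app A, 1 GB)
memcached -d -m 1024 -p 11212 -u memcached   (app B, 1 GB)
memcached -d -m 1024 -p 11213 -u memcached   (app C, 1 GB)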
This solution was OK for a while but we had a number of challenges.
· Problem 1. Naked memcached provides no out-of-the-box (OTB) instrumentation GUI, so we had to custom-develop monitoring and stats
· Problem 2. We were running on one node, so if that node went down our entire cache solution failed (a single point of failure) and our apps took a major performance hit
· Problem 3. The amount of space we had provisioned was woefully inadequate for our needs
As a solution to these problems we implemented a proprietary caching solution from Gear6.
Gear6 had three notable improvements to naked memcached that made it attractive for us to implement:
· Solution to Problem 1. The OTB instrumentation, administration, monitoring, and statistics were top-notch.
· Solution to Problem 2. Gear6 offered node-replication to provide HA failover for Cache instances
· Solution to Problem 3. The base-level Gear6 boxes combined RAM and SSD to provide 96 GB of replicated space, six times what we were running with naked memcached.
We purchased the Gear6 instance and got it set up and running. Then we spent a few months migrating our configs (see Part III).
Then we got the news: Gear6 had filed for Chapter 11.
After a long saga of trying to reach the new owners, we finally had a call in mid-October where Violin broke the news that they would no longer be supporting Gear6 and we should (I am paraphrasing) get off of it ASAP.
That day, one of the SSD drives in the Gear6 hardware failed and we could not recover the second box in the Gear6 cluster.
At this point, our Gear6 boxes were a ticking time-bomb and it was time for a new solution.
Since we were changing anyway...
We wanted to make sure that we not only addressed the original problems we had with memcached, but also implemented a solution that solved some additional issues beyond the original three:
· Problem 4. The cache should be distributed over at least 5 physical nodes, so that if a node failed, only 20% (or less) of the cache would be affected (see the sketch after this list)
· Problem 5. The nodes should be organized so that each development team has a bucket it can use as needed, without having to get more space "approved"
· Problem 6. New hardware can be added both vertically (by adding more RAM to the existing boxes) and horizontally (by adding more boxes)
· Problem 7. The company does not go out of business.
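To make Problem 4 concrete: with a client-side consistent-hash locator spreading keys over five nodes, a single node failure only takes out roughly a fifth of the keys. A minimal sketch with Enyim (hostnames and ports are placeholders, not our topology):

using Enyim.Caching;
using Enyim.Caching.Configuration;
using Enyim.Caching.Memcached;

class FiveNodeLayout
{
    static void Main()
    {
        var config = new MemcachedClientConfiguration();

        // Five physical nodes; with keys hashed across them, losing
        // one node affects roughly 20% of the cache, not all of it.
        config.AddServer("cache01:11211");
        config.AddServer("cache02:11211");
        config.AddServer("cache03:11211");
        config.AddServer("cache04:11211");
        config.AddServer("cache05:11211");

        // Ketama-style consistent hashing keeps most keys on their
        // original nodes when a node joins or leaves the pool.
        config.NodeLocator = typeof(KetamaNodeLocator);

        using (var client = new MemcachedClient(config))
        {
            // ... normal get/store calls, distributed across the five nodes
        }
    }
}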
Talking with Zynga
I had seen a presentation about Facebook using memcached and was convinced that, both to insulate ourselves from proprietary software and to scale, we should go back to using naked memcached with open-source instrumentation. This would likely have included phpMemcacheAdmin for stats and Nagios for alerts and monitoring.
Even with a naked memcached solution though, the administration of memcached becomes somewhat of a manual process: using different tools for different jobs rather than a centralized tool.
I spoke to Allan Leinwand at Zynga about our issues with Gear6 since he was familiar with Violin and he suggested that we speak to Membase. In talking to Membase and through our own research, we found that Membase solved all of our original problems, plus our new problems with Gear6.
1. Membase provides a rich set of both GUI and programmatic tools to manage and monitor the cache.
2. Membase not only runs on multiple physical nodes but balances keys across those nodes using vBuckets
3. Membase runs on Windows and can handle quite a bit more capacity (as evidenced by Zynga) than we could possibly use.
4. Membase uses both HA replication and distributed nodes for different solutions; in our case, it easily supports the five-node configuration
5. Membase provides Buckets that can be configured by port to allow different teams to have a set amount of space (see the sketch after this list)
6. Hardware can be added both horizontally and vertically to a Membase cluster. However, one limitation is that all nodes have to run with the same per-node cache quota, so you do need to think carefully about your node size
7. No company is immune to going under, but in addition to Membase's strong financial state, the risk is mitigated by two factors:
First, unlike Gear6, the Membase code is open-source so if they did go out of business it could still be maintained by the Community (or by ourselves if we wanted to learn Erlang).
Second, the Membase interface IS memcached, so if we did need to switch back to naked memcached, it would simply be a matter of uninstalling Membase and installing memcached, with no configuration change needed.
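Point 5 in practice: each team's bucket listens on its own dedicated port, so from the application's perspective a bucket is just another host:port endpoint. A sketch with Enyim (hosts, ports, and team names are all hypothetical):

using Enyim.Caching;
using Enyim.Caching.Configuration;

class PerTeamBuckets
{
    static void Main()
    {
        // The "content" team's bucket, exposed on its own port (placeholder).
        var contentConfig = new MemcachedClientConfiguration();
        contentConfig.AddServer("membase01:11213");

        // The "community" team's bucket, on a different port (placeholder).
        var communityConfig = new MemcachedClientConfiguration();
        communityConfig.AddServer("membase01:11214");

        using (var contentCache = new MemcachedClient(contentConfig))
        using (var communityCache = new MemcachedClient(communityConfig))
        {
            // Each team works inside its own pre-sized bucket.
        }
    }
}

And because each bucket speaks the memcached protocol, switching back to naked memcached really would just mean repointing that host:port, with no client-code change.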
Membase IS memcached
An underlying assumption of this entire process was that Membase runs just fine with standard memcached clients. Considering that all of our apps are built to use standalone memcached clients, migrating to a solution that used a different client architecture would have been prohibitive.
At the speed at which we were considering changing, we needed something that required almost zero client changes, which somewhat limited us to memcached or another after-market provider like Gear6.
Our choice, in the end, was between Membase and naked memcached rather than other proprietary cache providers. We found that the low cost of Membase and the benefits it provides outweighed any risks of going with a proprietary solution.
In Part II, I'll discuss the implementation itself:
- Hardware configuration
- Windows vs. Linux
- Membase vs. Memcached Buckets
- Environments & Bucket Configuration
In Part III, I'll cover our application stack and next steps:
- Application stack
- Memcached Clients
- Cache Migration Process
- Future Plans