New Website and Project Updates
Guilty as charged. I enjoy changing my websites and playing with different technologies more than writing actual content. Things have been very busy, and will be busier yet. In brief, here are some things that need their very own write-up.
Website, Powered By Hugo
I love Python’s reStructuredText markup language, which is what I used for my Pelican-based website. I was less enthused, however, when none of the themes had any support for reStructuredText’s more “advanced” features, or for anything beyond what Markdown can do. Nor did I want to dig into the Sass to do more in-depth work on the theme.
For the last 9 months or so I’ve been enthralled by Go. Its simplicity and efficiency make it a winning choice when working at larger scales. I also encountered Hugo and was very interested in the power and flexibility it offers for maintaining a website. This led me to redesign the website with Hugo 0.13 and Bootstrap 3.3.2. It’s also completely hosted on AWS S3. The only negative I have so far is that I’ve lost my IPv6 presence.
Git repositories once hosted at http://linuxczar.net/git/ now live in my repositories at GitHub. At least, the still-relevant ones do.
StatsRelay, my first real Go project, has been remarkably stable and efficient. With it I’m able to handle more than 350,000 packets/metrics per second to my Statsd service. In testing, I’ve pushed it toward 800,000 packets per second. I haven’t even rebuilt it with Go 1.4 yet.
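The core trick of a statsd relay is sharding metrics by name so the same key always lands on the same backend instance. Here is a minimal sketch of that idea in Go; the hash function and backend addresses are illustrative assumptions, not StatsRelay’s actual implementation.

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickBackend routes a statsd metric to one of several backend addresses
// using an FNV-1a hash of the metric name. A real relay reads UDP packets
// and forwards them; this only shows the sharding decision.
func pickBackend(metric string, backends []string) string {
	h := fnv.New32a()
	h.Write([]byte(metric))
	return backends[h.Sum32()%uint32(len(backends))]
}

func main() {
	// Hypothetical statsd backends.
	backends := []string{"10.0.0.1:8125", "10.0.0.2:8125", "10.0.0.3:8125"}
	// The same metric name always maps to the same backend, so each
	// statsd instance sees a consistent subset of the keyspace.
	for _, m := range []string{"web.requests", "web.errors", "db.queries"} {
		fmt.Printf("%s -> %s\n", m, pickBackend(m, backends))
	}
}
```

Because the routing is a pure function of the metric name, any number of relay instances can run in parallel without coordinating.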
How do you back up large Graphite clusters? I know folks who run a secondary cluster to mirror data to, but that would have been incredibly expensive for me. So why not use OpenStack Swift or Amazon S3? You get compression, retention, high speed, locking, and other fine features, and the storage format allows for manual restores in an emergency. Check out Whisper-Backup.
Carbontools is just an idea and some bad code right now, and probably not its final name either. The biggest problem I have with my Graphite cluster is manipulating data in a sane amount of time. The Python implementation of whisper-fill gets really slow when you need to operate on a few million WSP files.
- Can I make a whisper-fill that’s an order of magnitude faster?
- In a rebalance or expansion routine I want a near-atomic method of moving a WSP file: faster, and with less of the query strangeness that happens during those operations.
- Perform basic metric manipulations across large consistent-hashing clusters: tar archives, deletes, restores, building the WebUI search index, and so on.
In Go, of course.
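The core rule of whisper-fill is simple: copy a datapoint from the source archive only where the destination has a gap, never overwriting data the destination already holds. A Go sketch of that merge rule, using a map of timestamp to value as a stand-in for whisper’s on-disk archives:

```go
package main

import "fmt"

// fillGaps copies points from src into dst only at timestamps where dst
// has no value. Existing destination data always wins, which is what
// makes the operation safe to run against a live archive.
func fillGaps(dst, src map[int64]float64) {
	for ts, v := range src {
		if _, ok := dst[ts]; !ok {
			dst[ts] = v
		}
	}
}

func main() {
	dst := map[int64]float64{100: 1.0, 120: 2.0}           // 110 and 130 are gaps
	src := map[int64]float64{100: 9.9, 110: 1.5, 130: 3.0} // 9.9 must not win
	fillGaps(dst, src)
	fmt.Println(dst[100], dst[110], dst[120], dst[130]) // prints: 1 1.5 2 3
}
```

A real implementation has to do this per retention archive and respect each archive’s step, but the never-overwrite invariant above is the part that matters.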
Today I’m doing these with some Fabric tasks, but I’ve far exceeded what Fabric can really do, and the Python/SSH/Python setup is quite slow at my scale.
My wife and I are expecting a baby girl very soon. Very soon. Surely that will lead to exciting blog posts. Surely.