The unfortunate but necessary deed has been done: update the site to be fully static (so S3 can serve it for me). Ensure the latest backups are on S3. Add DNS entries to Route 53. Cut over the SOA record. Refresh the browser. Delete the Linode instance (which never gave me a lick of trouble). Pour one out :/ One thing I did not expect was for Pingdom to have a complete hissy fit.
Today I finally started work toward shutting down my Linode instance. I have loved my time with Linode, but realistically I’m not learning anything there anymore. I’ve grown older, the things I’m learning are different… and I’d just rather put my stuff up on a CDN and not even have a server to deal with. So today I start the process of shutting off all the crazy stuff I’ve built up over the years.
Sunrise was at 06:16 with low tide at 04:43 and a first quarter moon with a moonrise of 14:53 and moonset of 23:54. Skies were very clear and I started fishing at 05:45. Fished just south of the pier (a good fisherman could easily have cast to the bridge pier itself) on the east side with an incoming tide. Initially fished with a green clouser minnow cast downstream (south) and had an immediate strike on the first cast.
Ever had a situation where you want to send a file to a friend at work or something, and sending it over email makes you feel all dirty? One way to solve the problem is to copy the file into /var/www/foo or something and send a link… we’ve all done it. But there’s a better way :) Python can be used for web programming, and its standard library includes reference implementations of things like a simple HTTP server.
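A minimal sketch of the idea: the standard library’s `http.server` module can serve the current directory over HTTP with a few lines (this is the same machinery behind `python -m http.server`).

```python
# Serve the current working directory over HTTP using only the
# standard library -- handy for one-off file sharing on a LAN.
from http.server import ThreadingHTTPServer, SimpleHTTPRequestHandler

def make_server(port=0):
    """Return a server for the cwd; port=0 lets the OS pick a free port."""
    # SimpleHTTPRequestHandler serves files relative to the cwd and
    # renders a directory listing for "/".
    return ThreadingHTTPServer(("", port), SimpleHTTPRequestHandler)

# To share: make_server(8000).serve_forever(), then send your friend
# a link like http://your-host:8000/the-file.tar.gz
```

Obviously this has no auth and no TLS, so it’s for trusted networks and short-lived sharing only.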
Chula comes with a GUID generator class whose implementation is a bit naive. Considering Python already has (probably much better) functionality built in, I figured I’d see how fast they are. Here’s how I tested them:

```python
# Python imports
from random import randrange
from uuid import uuid1, uuid4
import time

# Chula imports
from chula.guid import guid

count = 10000

def timeit(fcn):
    def wrapper():
        start = time.time()
        fcn()
        print('%s %fs' % (fcn.__name__, time.time() - start))
    return wrapper
```
Recently I was in the process of moving my site to a much better hosting situation (more on that later). During the move I decided to upgrade from PostgreSQL-8.0 to PostgreSQL-8.3, as I was pretty far behind and I prefer to stay current. This sort of upgrade isn’t a big deal, and I’ve done it many times. So I did my usual process: install the desired version of PostgreSQL (in this case 8.3).
One of my goals in moving to California was to put more focus on getting healthy. Now I’m lucky enough to be able to ride a really nice bike to work each day and get in a little workout each way. The bike comes pre-configured as a fixed gear (aka you can’t coast), and the rear hub is a “flip flop,” which means you can easily convert it into a single-speed bike.
Git is by far my favorite version control system I’ve used. I use it all the time, and one of its benefits is how easy it is to share code. I usually just send someone the link to gitweb and they can look at my code there. Other times people give me their public key, and they can clone my stuff over ssh. But just recently I wanted a way to easily allow anonymous read-only access.
Seriously, I’ve had it. Perforce is the most horrible version control system ever. It’s not as bad as Microsoft Source Safe - but no one pays for that anymore so it doesn’t count.
When it comes to performance, one of the most important considerations is caching of content. There are all sorts of approaches to caching. Some protect the database from duplicate queries, while others protect your application from having to perform an expensive algorithm over and over. Today I am going to talk about the most aggressive form of content caching when it comes to the web: full page caching.