7 items tagged “amazon-web-services”
2011
The excess capacity story is a myth. It was never a matter of selling excess capacity; in fact, within two months of launch AWS would already have burned through the excess Amazon.com capacity. Amazon Web Services was always considered a business in its own right, with the expectation that it could even grow as big as the Amazon.com retail operation.
2009
aws—simple access to Amazon EC2 and S3. The best command line client I’ve found for EC2 and S3. “aws put --progress my-bucket-name/large-file.tar.gz large-file.tar.gz” is particularly useful for uploading large files to S3. Written in Perl (with no dependencies), shelling out to curl to do the heavy lifting.
Finding similar items with Amazon Elastic MapReduce, Python, and Hadoop streaming. Tutorial for running Hadoop jobs on Elastic MapReduce using Python and the 2005 Audioscrobbler dataset.
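The streaming model itself is straightforward: the mapper and reducer are ordinary scripts that read lines from stdin and write tab-separated key/value lines to stdout, with Hadoop sorting the mapper output by key in between. A minimal Python pair in that style is sketched below. It just totals plays per artist rather than doing the tutorial's actual similarity calculation, and the input field layout is an assumption, not necessarily the Audioscrobbler format.

    #!/usr/bin/env python
    # mapper.py - assumes input lines of "user<TAB>artist<TAB>plays"
    # (a guess at the layout); emits "artist<TAB>plays" pairs.
    import sys

    for line in sys.stdin:
        fields = line.rstrip('\n').split('\t')
        if len(fields) < 3:
            continue
        user, artist, plays = fields[0], fields[1], fields[2]
        print('%s\t%s' % (artist, plays))

    #!/usr/bin/env python
    # reducer.py - Hadoop streaming delivers mapper output sorted by key,
    # so consecutive lines for the same artist can be summed in one pass.
    import sys

    current_artist = None
    total = 0
    for line in sys.stdin:
        if '\t' not in line:
            continue
        artist, plays = line.rstrip('\n').split('\t', 1)
        if artist != current_artist:
            if current_artist is not None:
                print('%s\t%d' % (current_artist, total))
            current_artist = artist
            total = 0
        total += int(plays)
    if current_artist is not None:
        print('%s\t%d' % (current_artist, total))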
Amazon Elastic MapReduce (via) Hadoop as a service. Basically a web-based GUI around Hadoop—you could roll this yourself on EC2, but for a small markup on regular EC2 prices you get to avoid the extra work of setting everything up. Data processing scripts can be written in Java, Ruby, Perl, Python, PHP, R, or C++ and are loaded into S3 before firing off the job.
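You can also kick a job off programmatically rather than through the GUI. Here's a rough sketch using boto's Elastic MapReduce support; the bucket names and script paths are placeholders, and credentials are assumed to come from the environment or boto config.

    # Sketch: launch a Hadoop streaming job flow on Elastic MapReduce with boto.
    from boto.emr.connection import EmrConnection
    from boto.emr.step import StreamingStep

    conn = EmrConnection()  # picks up AWS credentials from the environment

    step = StreamingStep(
        name='Similar items',
        mapper='s3n://my-bucket/scripts/mapper.py',
        reducer='s3n://my-bucket/scripts/reducer.py',
        input='s3n://my-bucket/input/',
        output='s3n://my-bucket/output/',
    )

    jobflow_id = conn.run_jobflow(
        name='Audioscrobbler similarity',
        log_uri='s3n://my-bucket/logs/',
        steps=[step],
    )
    print(conn.describe_jobflow(jobflow_id).state)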
2008
How Tarsnap uses Amazon Web Services (via) Useful case study, including some thoughts on SimpleDB.
Amazon SimpleDB a complete flop? Terry asks if anyone is actually using SimpleDB (related Google searches suggest not, and I’ve personally not heard of anyone using it, despite plenty of usage of S3 and EC2). One factor might be lock-in: for EC2 and S3 it’s pretty small, but if you rely on SimpleDB you’ll need to rewrite your entire application to escape.
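To make the lock-in point concrete, here's roughly what SimpleDB access looked like through boto at the time—domain, item and attribute names are invented for illustration. Everything lives in SimpleDB's own domain/item/attribute model, values are all strings, and queries use its own select dialect, none of which maps cleanly onto another datastore.

    # Sketch: storing and querying items in SimpleDB via boto (names are made up).
    import boto

    sdb = boto.connect_sdb()  # credentials from the environment / boto config

    # A "domain" holds items; each item is a bag of string attribute/value pairs.
    domain = sdb.create_domain('products')
    item = domain.new_item('product-1234')
    item['name'] = 'Widget'
    item['price'] = '00019.99'   # everything is a string; numbers need zero-padding to sort
    item.save()

    # Queries use SimpleDB's own select syntax against that attribute model.
    for result in domain.select("select * from products where price > '00010.00'"):
        print('%s %r' % (result.name, dict(result)))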
backup_to_s3.py. I wrote Yet Another S3 backup script today. It’s a thin wrapper around boto that doesn’t do anything particularly impressive, but it fits my brain.
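For reference, the core of a script like this with classic boto is only a few lines. This isn't the actual backup_to_s3.py, just a sketch of the S3 calls such a wrapper likely makes; the bucket name and file path are placeholders.

    # Sketch: upload a file to S3 with boto.
    import os
    import boto
    from boto.s3.key import Key

    def backup_to_s3(bucket_name, filepath):
        # Credentials come from the environment or ~/.boto config.
        conn = boto.connect_s3()
        bucket = conn.get_bucket(bucket_name)
        key = Key(bucket)
        key.key = os.path.basename(filepath)
        key.set_contents_from_filename(filepath)

    if __name__ == '__main__':
        backup_to_s3('my-backup-bucket', '/path/to/large-file.tar.gz')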