How to autosave camera uploads to Google Drive

I’m a huge fan of Google Drive and Google Photos, and am in the process of moving all my photos over from Dropbox. One feature I miss from Dropbox is that it can capture screenshots and photos and automatically place them in the cloud for you.

Thankfully, it’s super easy to configure this for Drive on a Mac as well.

Saving screenshots to Drive automatically

  1. Create a new folder called “Screenshots” in your Google Drive.
  2. Open Terminal.
  3. Run the following commands:

     defaults write com.apple.screencapture location ~/Google\ Drive/Screenshots/
     killall SystemUIServer

Now screenshots will be saved to the Screenshots folder in your Google Drive, not the desktop.
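
If you ever want to double-check or undo this, the same defaults tooling can read the setting back or remove it (followed by the same killall to apply the change):

defaults read com.apple.screencapture location      # show the current save location
defaults delete com.apple.screencapture location    # revert to the default (Desktop)
killall SystemUIServer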

Saving camera uploads to Drive automatically

  1. Create a new folder called “Camera Uploads” in your Google Drive.
  2. Connect your camera.
  3. Open the “Image Capture” app.
  4. Select your camera on the left.
  5. At the bottom of the device list on the left, you may see “Connecting this camera opens:”. If not, click the little up arrow at the bottom left of the window.
  6. Select “AutoImporter” from the list of applications for “Connecting this camera opens:”.
  7. Change the “Import To:” folder to your Camera Uploads folder in Google Drive.

My First 5 Minutes On A Server; Or, Essential Security for Linux Servers

Server security doesn’t need to be complicated. My security philosophy is simple: adopt principles that will protect you from the most frequent attack vectors, while keeping administration efficient enough that you won’t develop “security cruft”. If you use your first 5 minutes on a server wisely, I believe you can do that.

Any seasoned sysadmin can tell you that as you grow and add more servers & developers, user administration inevitably becomes a burden. Maintaining conventional access grants in the environment of a fast growing startup is an uphill battle – you’re bound to end up with stale passwords, abandoned intern accounts, and a myriad of “I have sudo access to Server A, but not Server B” issues. There are account sync tools to help mitigate this pain, but IMHO the incremental benefit isn’t worth the time nor the security downsides. Simplicity is the heart of good security.

Our servers are configured with two accounts: root and deploy. The deploy user has sudo access via an arbitrarily long password and is the account that developers log into. Developers log in with their public keys, not passwords, so administration is as simple as keeping the authorized_keys file up-to-date across servers. Root login over ssh is disabled, and the deploy user can only log in from our office IP block.

The downside to our approach is that if an authorized_keys file gets clobbered or mis-permissioned, I need to log into the remote terminal to fix it (Linode offers something called Lish, which runs in the browser). If you take appropriate caution, you shouldn’t need to do this.

Note: I’m not advocating this as the most secure approach – just that it balances security and management simplicity for our small team. From my experience, most security breaches are caused either by insufficient security procedures or sufficient procedures poorly maintained.

Let’s Get Started

Our box is freshly hatched, virgin pixels at the prompt. I favor Ubuntu; if you use another Linux distribution, your commands may vary. Five minutes to go:

passwd

Change the root password to something long and complex. You won’t need to remember it; just store it somewhere secure. This password will only be needed if you lose the ability to log in over ssh or lose your sudo password.

apt-get update
apt-get upgrade

The above gets us started on the right foot.

Install Fail2ban

apt-get install fail2ban

Fail2ban is a daemon that monitors login attempts to a server and blocks suspicious activity as it occurs. It’s well configured out of the box.
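
To confirm it’s running and watching SSH, you can ask its client for status. Depending on your fail2ban version, the jail may be called ssh or sshd:

fail2ban-client status          # lists active jails
fail2ban-client status ssh      # per-jail detail (use sshd if that's the jail name)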

Now, let’s set up your login user. Feel free to name the user something besides ‘deploy’; it’s just a convention we use:

useradd deploy
mkdir /home/deploy
mkdir /home/deploy/.ssh
chmod 700 /home/deploy/.ssh

Require public key authentication

The days of passwords are over. You’ll enhance security and ease of use in one fell swoop by ditching those passwords and employing public key authentication for your user accounts.

vim /home/deploy/.ssh/authorized_keys

Add to this file the contents of the id_rsa.pub from your local machine, along with any other public keys that you want to have access to this server.
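
If you don’t have your key handy, you can print it on your local machine and paste it in (this assumes the default RSA key path; adjust if yours lives elsewhere):

cat ~/.ssh/id_rsa.pub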

chmod 400 /home/deploy/.ssh/authorized_keys
chown deploy:deploy /home/deploy -R

Test The New User & Enable Sudo

Now test the new account by logging into your server as the deploy user (keep the terminal window with the root login open). If you’re successful, switch back to the terminal with the root user active and set a sudo password for your login user:

passwd deploy

Set a complex password – you can either store it somewhere secure or make it something memorable to the team. This is the password you’ll use to sudo.

visudo

Comment out all existing user/group grant lines and add:

root    ALL=(ALL) ALL
deploy  ALL=(ALL) ALL

The above grants sudo access to the deploy user when they enter the proper password.
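
Before going further, it’s worth confirming the grant actually works. From your deploy session, something like:

sudo whoami    # should prompt for the deploy password and print root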

Lock Down SSH

Configure ssh to prevent password & root logins and lock ssh to particular IPs:

vim /etc/ssh/sshd_config

Add these lines to the file, substituting the IP address(es) you will be connecting from:

PermitRootLogin no
PasswordAuthentication no
AllowUsers deploy@(your-ip) deploy@(another-ip-if-any)
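
A typo in sshd_config can lock you out, so it doesn’t hurt to let sshd validate the file before restarting; it prints nothing if the config parses cleanly:

sshd -t    # or /usr/sbin/sshd -t if sshd isn't on your PATH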

Now restart ssh:

service ssh restart

Set Up A Firewall

No secure server is complete without a firewall. Ubuntu provides ufw, which makes firewall management easy. Run:

ufw allow from {your-ip} to any port 22
ufw allow 80
ufw allow 443
ufw enable

This sets up a basic firewall and configures the server to accept traffic over port 80 and 443. You may wish to add more ports depending on what your server is going to do.
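
You can review the resulting rule set at any time:

ufw status verbose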

Enable Automatic Security Updates

I’ve gotten into the apt-get update/upgrade habit over the years, but with a dozen servers, I found that servers I logged into less frequently weren’t staying as fresh. Especially with load-balanced machines, it’s important that they all stay up to date. Automated security updates scare me somewhat, but not as badly as unpatched security holes.

apt-get install unattended-upgrades

vim /etc/apt/apt.conf.d/10periodic

Update the file to look like this:

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Download-Upgradeable-Packages "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Unattended-Upgrade "1";

One more config file to edit:

vim /etc/apt/apt.conf.d/50unattended-upgrades

Update the file to look like the below, replacing “lucid” with your Ubuntu release’s codename. You should probably keep regular updates disabled and stick with security updates only:

Unattended-Upgrade::Allowed-Origins {
        "Ubuntu lucid-security";
//      "Ubuntu lucid-updates";
};
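
If you’d like to see what unattended-upgrades would do without waiting for the daily cron run, a dry run is a reasonable sanity check:

unattended-upgrade --dry-run --debug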

Install Logwatch To Keep An Eye On Things

Logwatch is a tool that monitors your logs and emails a daily summary to you. This is useful for tracking and detecting intrusion. If someone were to access your server, the logs that are emailed to you would be helpful in determining what happened and when, since the logs on the server itself might have been compromised.

apt-get install logwatch

vim /etc/cron.daily/00logwatch

Add this line, substituting your own email address:

/usr/sbin/logwatch --output mail --mailto test@gmail.com --detail high
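
Note that the report goes out via the local mail system, so you’ll need a working MTA (postfix, sendmail, or similar) for delivery. You can also run the command once by hand to make sure the email arrives (the --range today flag just reports on today’s logs instead of yesterday’s):

/usr/sbin/logwatch --output mail --mailto test@gmail.com --detail high --range today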

All Done!

I think we’re at a solid place now. In just a few minutes, we’ve locked down a server and set up a level of security that should repel most attacks while being easy to maintain. At the end of the day, it’s almost always user error that causes break-ins, so make sure you keep those passwords long and safe!

I’d love to hear your feedback on this approach! Feel free to discuss on Hacker News or follow me on Twitter.

Update

There’s a great discussion happening over at Hacker News. Thanks for all the good ideas and helpful advice! As our infrastructure grows, I definitely plan on checking out Puppet or Chef – they sound like great tools for simplifying multi-server infrastructure management. If you’re on Linode like us, the above can be accomplished via StackScripts as well.

A user is stealing from us right now and I don’t mind

As I write this, some guy in Florida is using stolen credit cards to successfully steal tens of thousands of dollars of products from us. Or at least, that’s what he thinks he’s doing.

When someone steals, buys, or generates a credit card number with the intention of committing purchase fraud, the typical first step is determining if the card is valid. A stolen number runs the risk of being cancelled at any moment, and nothing stops a promising career in white collar crime in its tracks quite like a decline in the Walmart checkout aisle with $5000 of merchandise in the cart.

The preferred method then is to run a small online transaction on each stolen card. Once you’ve found a valid card number, you re-magnetize a card and the shopping spree begins! This is why, if you’ve ever had your card stolen, you’ll almost always see a smaller test transaction at an online retailer before the large purchase at a retail store.

As an online retailer dealing in micro transactions (<$5), we have to be especially cautious about this form of credit card fraud. Most of our products aren’t especially tempting to fraudsters given their customizability (i.e. you can’t resell an Ink card) – but the low transaction amounts are ideal for testing stolen cards. Undetected fraudulent transactions result in chargebacks and rising merchant account fees.

My favorite way (by far) of combating this type of fraud is called the hellban. If you’re not familiar with the concept, it’s pretty straightforward and totally insidious: once a user is hell-banned, the site or app behaves normally for them – but none of their actions have any effect. It’s a popular method of forum moderation – if a user starts trolling your members or posting spam, you just hellban them. They’ll eventually give up on your site when no one seems to respond to their posts.

The same concept can be applied to credit card fraud prevention: a user who is hell-banned by our system (either through automated or manual means) sees their purchases go through (with some declines mixed in for realism) and receives ‘fake’ credits that let them buy products we never send. Of course, we’ve completely blocked all credit card transactions from going through at this point – protecting us from the liability of chargebacks.

Couldn’t you just delete the user account or ban their IP?

We sure could! This would effectively boot them off our system – but for how long? We are a tempting target for credit card fraudsters, and they expect to be banned for their bad behavior. They’d likely just switch to another VPN, sign up for another free account, and do it all over again, which means I now have another user account I need to hunt down and ban.

A hell-banned user as a rule sticks around for longer, all the while collecting especially poor empirical data on their credit cards. This in turn allows us to collect logs that are helpful in identifying them (and other fraudsters) in the future and reporting their activity to authorities.

Most importantly, it’s especially good sporting fun!

Continue the discussion on Hacker News and follow me on Twitter

Setting up MySQL replication without the downtime

I clearly don’t need to expound on the benefits of master-slave replication for your MySQL database. It’s simply a good idea; one nicety I looked forward to was the ability to run backups from the slave without impacting the performance of our production database. But the benefits abound.

Most tutorials on master-slave replication use a read lock to accomplish a consistent copy during initial setup. Barbaric! With our users sending thousands of cards and gifts at all hours of the night, I wanted to find a way to accomplish the migration without any downtime.

@pQd via ServerFault suggests enabling bin-logging and taking a non-locking dump with the binlog position included. In effect, you’re creating a copy of the database annotated with its binary log position, which allows the slave to catch up once you’ve migrated the data over. This seems like the best way to set up a MySQL slave with no downtime, so I figured I’d document the step-by-step here, in case it proves helpful for others.

First, you’ll need to configure the master’s /etc/mysql/my.cnf by adding these lines in the [mysqld] section:

server-id=1
binlog-format   = mixed
log-bin=mysql-bin
datadir=/var/lib/mysql
innodb_flush_log_at_trx_commit=1
sync_binlog=1
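
Then restart MySQL on the master so the new settings take effect. On an Ubuntu box of this era, that’s typically:

service mysql restart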

Once the master is back up, create a replication user that your slave server will use to connect to the master:

CREATE USER replicant@<<slave-server-ip>>;
GRANT REPLICATION SLAVE ON *.* TO replicant@<<slave-server-ip>> IDENTIFIED BY '<<choose-a-good-password>>';

Note: MySQL only allows passwords of up to 32 characters for replication users, so keep this one under that limit.

Next, create the backup file with the binlog position. It will affect the performance of your database server, but won’t lock your tables:

mysqldump --skip-lock-tables --single-transaction --flush-logs --hex-blob --master-data=2 -A  > ~/dump.sql

Now, examine the head of the file and jot down the values for MASTER_LOG_FILE and MASTER_LOG_POS. You will need them later:

head -n 80 ~/dump.sql | grep "MASTER_LOG_POS"
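
Because we passed --master-data=2, those coordinates are written into the dump as a commented-out CHANGE MASTER statement, so the line you’re grepping for looks roughly like this (the file name and position here are just illustrative):

-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000002', MASTER_LOG_POS=107;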

Because this file was huge in my case, I gzipped it before transferring it to the slave, but that’s optional:

gzip ~/dump.sql

Now we need to transfer the dump file to our slave server (if you didn’t gzip first, remove the .gz bit):

scp ~/dump.sql.gz mysql-user@<<slave-server-ip>>:~/

While that’s running, you should log into your slave server, and edit your /etc/mysql/my.cnf file to add the following lines:

server-id         = 101
binlog-format     = mixed
log_bin           = mysql-bin
relay-log         = mysql-relay-bin
log-slave-updates = 1
read-only         = 1
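
As on the master, restart MySQL so the new configuration is picked up:

service mysql restart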

With MySQL restarted, import your dump file:

gunzip ~/dump.sql.gz
mysql -u root -p < ~/dump.sql

Log into your mysql console on your slave server and run the following commands to set up and start replication:

CHANGE MASTER TO MASTER_HOST='<<master-server-ip>>',MASTER_USER='replicant',MASTER_PASSWORD='<<slave-server-password>>', MASTER_LOG_FILE='<<value from above>>', MASTER_LOG_POS=<<value from above>>;
START SLAVE;

To check the progress of your slave:

SHOW SLAVE STATUS \G

If all is well, Last_Error will be blank, and Slave_IO_State will report “Waiting for master to send event”. Look at Seconds_Behind_Master, which indicates how far behind the slave is. It took me a few hours to accomplish all of the above, but the slave caught up in a matter of minutes. YMMV.

And now you have a newly minted mysql slave server without experiencing any downtime!

A parting tip: sometimes errors occur in replication, for example if you accidentally change a row of data on your slave. If this happens, fix the data, then run:

STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
START SLAVE;

Update: In following my own post when setting up another slave, I ran into an issue with authentication. The slave status showed an error of 1045 (credential error) even though I was able to connect directly using the replicant credentials. It turns out that MySQL only allows passwords of up to 32 characters for master-slave replication, which was the source of the problem.

Update #2: An astute reader noted that he ran into a “MySQL server has gone away” error with the initial dump. The solution he found was to add the following to the slave’s my.cnf before running the import:

 [mysqld]
 max_allowed_packet=16M