I would like to suggest that the word “unprofessional” be struck from the dictionary – and anyone who uses it struck with a dictionary. It is a word which conveys no useful information or proposal for action, and is thus nothing but meaningless noise.
The purpose of communication is to adjust another person’s process of cognition. I’ve heard it said that “all communication is persuasion”, which is quite true – you’re trying to persuade someone to change what they think. We can consider the intention and effectiveness of an attempt to communicate in this light.
What is someone trying to achieve when they label a person or behaviour “unprofessional”? If we’re being charitable, we would probably say that they’re trying to highlight that something is bad, or could be better. However, just stamping our foot and saying “bad!” isn’t enough – it’s also important to provide some information that the recipient can act upon.
The problem with the word “unprofessional” is that it really isn’t specific enough on the subject of “what is wrong”. Have you ever had someone say something like, “your behaviour yesterday was really unprofessional”? They’re assuming you know what they’re talking about – and you might well have a reasonable guess – but what if you guess wrong? Should you never do anything you did yesterday, just in case that particular thing was unprofessional?
When I’ve caught myself thinking, “that was unprofessional”, of my own behaviour, or someone else’s, I think about what caused me to think that. Once I drill down into it, I usually come to the conclusion that what I really meant was, “I don’t like that”. Since I’m not paid to like things, that’s pretty much irrelevant as a reason to tell someone not to do something.
On the occasions when I come up with something more concrete, it is invariably a more useful expression than “unprofessional”. Things like, “it frustrates the customer”, or “it pisses off the person sitting in the next cube” are a much better expression of why something is bad than “unprofessional”.
I’d encourage everyone to keep a careful watch over themselves and those around them for use of the word. When you catch yourself saying it (or thinking it), examine your motives more closely. Whatever the more specific adjective is, use that instead. If it just comes down to “I don’t like that”, at the very least say that to the person you’re talking to. Don’t try and hang anything grandiose on your personal prejudices. You might come off as being petty, but at least you’ll be honest.
I am led to believe that splunkd (some agent for feeding log entries into the Grand Log Analysis Tool Of Our Age™) has no capability for running itself in the foreground. This is stupid. Do not make these sorts of assumptions about how the user will want to run your software. Some people use sane service-management systems that are capable of handling the daemonisation for you and automatically restart the managed process on crash. These systems are typically much easier to configure and debug, and they don’t need bloody PID files and the arguments about where to put them (tmpfs, inside or outside chroots… oh my) and who should update them and how to reliably detect that they’re out of date when they crash without causing race conditions and whether non-root-running processes should place their PID files in the same place and how do you deal with the permissions issues and… bugger that for a game of skittles.
In short, if you provide a service daemon and do not provide some well-documented means of saying “don’t background”, I will hurt you. This goes double if your shitware is not open source.
Anyone that has a fondness for good ol’ RSpec knows that there’s a fair number of matchers and predicates and whatnot involved. Life isn’t helped by the recent (as of 2.11) decision to switch to using the new expect syntax in preference to should (with warnings that should will be going away entirely at some point in the future).
There is a good looking RSpec cheatsheet out on the ‘net, but it dates from 2006, and things have changed since then. We’re using RSpec at work a lot at the moment, though, so our tech writer kindly updated it for the new-style syntax, gave it a nice blue tint, and put it out there for the world at large to use. Here is our updated RSpec cheatsheet for anyone who is interested.
I can tell you for certain that a double-sided, laminated version of this sucker looks very nice, and is a handy addition to the desk-of-many-things.
Do you have one (or a few) centralized database servers, either standalone or clustered, or do you spread the load like we are currently?
His argument for centralisation is one of easing the management burden of configuration and backups, whereas the distributed approach eliminates a central point of failure and performance degradation.
I go for distributed, all the way. For a start, we run so many databases for so many customers that there’s no way on earth we could stand up a small number of database servers and handle all the load (hell, we’ve got single customers who consume a cluster of machines with 384GB of RAM and all the SSDs you can eat). Security and permissions are a whole other kettle of fish; the contortions we’d have to do to allow customers the level of management they need with a centralised database system would be immense. Then there’s the need of some customers for MySQL, some for PgSQL, different performance tuning for different workloads… nope, centralised DBs don’t work for us.
Given this, we’ve bitten the bullet and solved pretty much all of the management problems. Installation and configuration is all handled via Puppet, and backups are trivial – the same system that installs the DB server itself also drops a hook script that the backup agent uses to know that it has to dump a database server. Monitoring that this backup is taking place successfully is also automatically provisioned, so we know that we’re not missing anything.
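As a sketch of the shape of such a hook (the real thing is generated by Puppet and site-specific; the spool path, the dump options, and the idea that the agent simply sweeps a directory are all assumptions here):

```ruby
#!/usr/bin/env ruby
# Hypothetical pre-backup hook: dump every database into a spool
# directory that the backup agent is assumed to sweep up afterwards.

DUMP_DIR = "/var/spool/db-backups"  # assumed location

# pg_dumpall captures all databases (and roles) in one pass; the date
# stamp keeps successive nightly runs distinguishable.
def dump_command(dir, date = Time.now)
  "pg_dumpall --clean --file=#{dir}/all-#{date.strftime('%Y%m%d')}.sql"
end

# The real hook would run this and fail loudly on error, e.g.:
#   system(dump_command(DUMP_DIR)) or abort "database dump failed"
puts dump_command(DUMP_DIR)
```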
Ultimately, this same approach applies to practically anything that you’re tossing up between centralised and distributed. At scale, you can never rely on centralisation, so you may as well bite the bullet and learn how to do it distributed pretty much from the start. That saves some serious system shock when you discover what your hardware vendor wants for the next step up in big iron hardware…
Since there is absolutely zero Google juice on this problem, here’s some hints in case someone else is out there beating their heads on their keyboard in frustration.
The problem: when trying to define a storage pool (or 90+% of other commands), you get this sort of result:

# virsh pool-define /tmp/pooldef
error: Failed to define pool from /tmp/pooldef
error: this function is not supported by the connection driver: virStoragePoolDefineXML

# virsh pool-create /tmp/pooldef
error: Failed to create pool from /tmp/pooldef
error: this function is not supported by the connection driver: virStoragePoolCreateXML
Not helpful at all. The problem is (or, at least it was for me) that I have both KVM and virtualbox installed (I prefer KVM, but vagrant uses virtualbox and I’m playing around with it). It would appear that libvirt is preferring to use virtualbox over KVM, which is stupid because virtualbox doesn’t appear to be fully supported (as evidenced by the extensive set of functions that are not supported by the virtualbox connection driver).
The solution: edit /etc/libvirt/libvirt.conf, and ensure that the following line is defined:
uri_default = "qemu:///system"
This will tell libvirt to use KVM (via qemu) rather than virtualbox, and you can play with pools to your heart’s content.
… when it’s a “prediction”.
In the 4th January edition of the Guardian Weekly, the front page story, entitled “Meet the world’s new boomers”, contained this little gem:
Back in 2006, [PricewaterhouseCoopers] made some forecasts about what the global economy might look like in 2050, and it has now updated the predictions in the light of the financial crisis and its aftermath.
Delightful. They made some forecasts about what the global economy might look like. Given that they didn’t include any impact of the GFC in those forecasts, it clearly wasn’t particularly accurate forecasting.
Y’know what an inaccurate prediction is called? Guesswork. Let’s call a spade a spade here. I see this all the time, and it’s starting to shit me. People making predictions and forecasts and projections hither and yon, and they’re almost always complete bollocks, and they never get called on it. I read the Greater Fool blog now and then, and that blog is chock full of examples of people making predictions which have very little chance of being in any way accurate.
While Dr Ben Goldacre and others are making inroads into requiring full disclosure in clinical trials, I’m not aware of anyone taking a similar stand against charlatans making dodgy-as-hell predictions over and over again, with the sole purpose of getting attention, without any responsibility for the accuracy of those predictions.
Is anyone aware of anyone doing work in this area, or do I need to register badpredictions.net and start calling out dodginess?
The ancient Egyptians were a pretty cool bunch, but their worship of cats really added something to their civilisation (double bonus: their word for “cat” was “mau”). The Internet itself, while undeniably a fantastic resource, reached new heights with the introduction of LOLCats. If you are cat-poor, you can swap your shabby tat for a tabby cat, while if you’ve gone a bit overboard you can sell your excess cats to cat converters.
However, cats have found minimal employment in systems administration. Until now. As the day job has been an early adopter of btrfs, everyone at work has been very interested in the reported hash DoS of btrfs, and it has been a topic of considerable discussion around the office. It can be a tough topic to explain, though, to people less well versed in the arcana of computer science.
Not to be deterred, Barney, our tech writer, took the standard explanation, added some cats, and came up with an explanation of the btrfs hash DoS that your parents can understand. The density of cat-related puns is impressive.
(Incidentally, if you don’t need cats to understand btrfs hash DoS attacks, and live in the Sydney area, you might be interested in working for Anchor as a sysadmin).
I’ve been a wannabe GTD aficionado for some years. I’ve wanted to do it, but managing lists has always involved too much friction, overhead, or whatever. Finally, though, I think I might have found a way to manage lists that works.
My use-case isn’t unique, although I will concede I’m perhaps being more dogmatic than most. I want something that:
Is electronic (yes, the hipster PDA is a cool idea, but my handwriting is beyond woeful and I already carry enough crap in my pockets as it is);
Works offline (because I often do);
Will work on both my phone and laptop (because I want to have my lists with me when I’m not at my computer, but long-form data entry or manipulation on a phone is painful);
Makes it easy to add, browse, modify, and above all remove items; and
Is “mine” (no central databases I don’t control, proprietary apps on my phone, etc).
My previous attempt was a tool I called “tagnote” – it was a vim-outliner file full of hierarchically organised outliner entries, with tags inlined. It was a neat idea, but it wasn’t smooth to add/browse/delete items, and didn’t work with my phone at all (trying to use vim for any length of time on a bottom-of-the-range Android phone would kill me).
The current iteration, as the title of this post suggests, is a list manager that entirely uses e-mail. It really is a perfect symbiosis:
I want lists of text items with titles (Subject:), potentially other metadata (X-Whatever headers), and possibly some notes (the body of the e-mail);
I need to be able to browse and remove completed items (that’s what e-mail clients are for);
Getting new items into the system is trivial (anything I send to my PDA’s e-mail address goes straight into the INBOX, which I can then process as time permits); and
Syncing between my laptop and PDA is as simple as offlineimap and K-9 mail.
So what have I got, exactly? It’s fairly straightforward:
An IMAP account on my existing mail server;
A slightly tweaked copy of mutt (different colours so I don’t confuse myself, and a different layout of the index page to get rid of unnecessary columns);
Another offlineimap account;
Another K-9 mail account;
Note2self (a neat little app to take a typed or voice-transcribed note and e-mail it to a pre-set address) on the phone, pre-programmed to e-mail any notes I write to the PDA’s inbox; and
A small shell script to make it trivial to create new lists (which has to be done on the IMAP server for reasons of offlineimap), add new items to a list, sync my lists (in other words, “run offlineimap”), display my lists (in other words, “run mutt”), and process my “tickler” file.
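The “add new items” part is the only mildly fiddly bit, and even that is tiny: since a list is just a mail folder, adding an item amounts to dropping a one-message file into a Maildir. A Ruby sketch of the idea (my script is shell, and the folder layout here is an assumption):

```ruby
require 'fileutils'
require 'socket'
require 'time'

# Add an item to a list by dropping a minimal one-message file into the
# Maildir folder that backs the list (paths and layout assumed).
def add_item(maildir, subject, body = "")
  new_dir = File.join(maildir, "new")
  FileUtils.mkdir_p(new_dir)
  # Maildir unique-name convention: <epoch>.<pid>.<hostname>
  name = "#{Time.now.to_i}.#{Process.pid}.#{Socket.gethostname}"
  File.write(File.join(new_dir, name),
             "Subject: #{subject}\nDate: #{Time.now.rfc2822}\n\n#{body}\n")
  File.join(new_dir, name)
end
```

Browsing, completing (that is, deleting), and syncing then all come for free from mutt, offlineimap, and K-9.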
That last point is the one I’m really happy I achieved. I’ve always been a fan of “hide it until you need it”, but my previous system didn’t let me do that. Now, though, I have a separate list called tickler, and all the items in there have an X-Tickle header, which specifies the date I want to see them. Each night a cronjob runs through the tickler and moves anything for today into the INBOX. An X-Tickle-Repeat header lets me have things that repeat over and over again.
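The cronjob itself is only a few lines. A Ruby sketch of the idea, assuming the lists live in local Maildir folders (my actual version is a shell script, and the X-Tickle-Repeat handling – rewriting the header to the next due date – is elided here):

```ruby
require 'fileutils'
require 'date'

# Move any tickler item whose X-Tickle date has arrived into the INBOX.
# Both arguments are Maildir paths; X-Tickle-Repeat handling is left
# out of this sketch.
def tickle(tickler, inbox, today = Date.today)
  moved = []
  Dir.glob(File.join(tickler, "{cur,new}", "*")).each do |msg|
    date = File.read(msg)[/^X-Tickle:\s*(\S+)/, 1] or next
    next if Date.parse(date) > today    # not due yet
    FileUtils.mv(msg, File.join(inbox, "new", File.basename(msg)))
    moved << File.basename(msg)
  end
  moved
end
```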
So in short, using entirely open-source tools and a couple of hours of my time doing things I enjoy anyway (shell scripts! woo!), I’ve now got a list manager that doesn’t get in my way more than it absolutely has to. We’ll see how long I last this time before I feel the urge to “improve” my lists again.
Don’t spend the first two minutes of the first episode of your podcast telling everyone what a “micropodcast” is, and how iTunes only lets you have 20MB per episode. The only exception to this might be if you were making a podcast about podcasting. Which you weren’t.
That is all.
Reading documentation does pay off. Browsing through the Rakefile format documentation for Rake just now, I found mention of the multitask method – which declares that all of that task’s prerequisites can be executed in parallel.
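In Rakefile terms the difference is a one-word change. A sketch (the build and multibuild names match the comparison below; the prerequisite tasks are stand-ins that just sleep):

```ruby
require 'rake'
extend Rake::DSL  # allow use of the Rakefile DSL in a plain script

# Stand-in prerequisites; in a real Rakefile each would compile something.
%w[parser lexer docs].each do |name|
  task(name) { sleep 0.2 }
end

# task: prerequisites execute serially, in order.
task :build => %w[parser lexer docs]

# multitask: the same prerequisites, each executed in its own thread.
multitask :multibuild => %w[parser lexer docs]
```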
A comparison run:
$ rake clean; time rake build

real    0m7.116s
user    0m6.788s
sys     0m0.260s

$ rake clean; time rake multibuild

real    0m3.820s
user    0m8.809s
sys     0m0.288s
This is a trivially small build I’m doing, I must admit, but halving the build time (in this case at least) pays huge dividends in my perceived productivity. It really blows the dust out of my CPU cores, too, which tend to be woefully underutilised (this being a quad-core laptop and all).
So I say unto you all: go forth and multitask.