Wednesday 26 October 2011

Unix might now be considered a successful operating system, dominating the server-side market via Linux and increasingly sitting in people's pockets through Android and iOS.
Conceived at the birth of the big timesharing systems, Unix contains many design decisions that were acceptable then but seem somewhat archaic now, especially when compared with how it is actually used today. I'll focus on one specific aspect - multiple applications per Unix instance.
A very common (2011) paradigm is to deploy multiple Unix instances per hypervisor, with the hypervisor taking the lion's share of responsibility for scheduling and isolation. The irony is that these are the specific things that Unix is supposed to be good at!
I personally lay the blame at the feet of cowardly system administrators unwilling to take a risk with an operating system they claim to love, retreating instead into the mystical protection of a hypervisor. Or at the feet of "best practice", which leads to avoiding anything approaching adult responsibility for what happens on a machine.
A more charitable view is that the nature of workloads actually changed between the 1990s and the 2000s, as did the nature of the machines that ran them and the expectations of what those workloads should cost. The fact remains that if we are now using purpose-built hypervisors to offload responsibility from the guest OS, what is that guest OS now doing?
As alternatives: BEA had a tilt at running Java straight on hypervisor-supervised virtual machines, IBM's VM operating system has a very lightweight OS (CMS) that runs on its hypervisor, and there's no shortage of smaller, lighter implementations of Unix-like OSes. Amazon, via Elastic Beanstalk, and Google, through App Engine, have shown us that in the end we probably don't care what is actually running underneath our apps.
I quite like aspects of the Unix architecture - but it's a general-purpose OS that's increasingly being used for a single purpose. If you can't trust it to isolate processes from each other and to prioritise those processes, what can you trust it for?
So, Unix on hypervisors - what have you done for us lately?
Monday 10 October 2011
Quality
I remember rather vividly a particularly poor university lecture on Quality. The lecturer was asking what constituted quality. At that stage I took (and still take) a fairly dim view of the unqualified use of the term "quality", since it has a fairly specific meaning in traditional English, roughly akin to "posh". I pointed out that quality, to me, meant a brand like Rolls-Royce. I got laughed at, but you can see the consequences of the unqualified use of the term "quality" all around us in IT.
Quality control is meant to establish a minimum acceptable standard for a given item, with testing at a statistically determined frequency to gain a level of certainty that an item meets that standard. But that isn't what was discussed during the lecture - there was a lot of talk about items being expected to work all the time.
Certainly, in the 15 years since then, I've seen quality usually equated with testing, be that manual testing, automated testing or unit test coverage. It is hard to argue that these activities aren't desirable in the production of high-quality code, but it's again somewhat missing the point.
The use of quality unqualified is the topic of Zen and the Art of Motorcycle Maintenance, which did come up in the lecture, and which the lecturer confessed to not having read. If he had, he would have been more open to my Rolls-Royce definition. Each of us has an intrinsic understanding of what quality is, and it extends past listing off a description of the properties of the product.
The most common example nowadays might be Apple products - you could take the same base set of components and build them into an Apple MacBook or a Windows-based laptop. Both will have passed quality control. Both have operating systems that generally work. Both will cost roughly the same amount. One will usually be described as being of higher quality than the other. Not everyone will agree on which one is which.
As a coarser example, given the same set of ingredients and the same recipe, two people may come up with altogether different results. The resulting dishes might be identical when described, but will have an intrinsic difference in terms of unqualified quality.
This leads us to the interesting case where an item can pass all its quality checks but be of undeniably poor quality - often it just doesn't feel "right". In an industry that loves measurable quantities (or at least professes to), this is a difficult message to get across. The question is then: how do we determine quality? I don't believe the answer can be found by spending more time on the work - calligraphy masters sweep their brush past the paper once to create a masterpiece. Nor is it to establish a hierarchy of masters and apprentices - the greatest examples of genius can come from outside formal structures.
Perhaps it's as simple as being open about our feelings on quality, and knowing that by practising on things that aren't masterpieces, and by obsessing about the detail in everything, we will be ready to do truly quality work when the call really comes.
Tuesday 27 September 2011
Heroes in computing
I've kind of gone through life with a leery scepticism of hero worship, but I do have at least one person who is a genuine hero to me. I'm not sure whether it was just one conversation or many over the years, but Phil Steele of TAB NSW/Limited helped form my actual knowledge, rather than anything I had merely picked up before.
The conversation was about the use of an old IBM mainframe OS - DOS/VSE (probably DOS/VS when they actually made the decision). VSE was limited to running a few partitions - the equivalent of a few threads or processes in contemporary terms. The much more expensive MVS would allow more, and the logic was that it would therefore do more and was worth the price jump.
However, TAB was not a rich organisation and could only afford a few disks and a CPU per mainframe. The semantics of the operating system at the time meant that if you performed a disk write, you were blocked until the write completed. And naturally a CPU could only do one thing at a time. The number of processes you needed to saturate your kit therefore follows a fairly straightforward formula: number of disks + number of CPUs.
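To put that rule into modern terms, here's a minimal sketch - the Python is mine and the numbers are purely illustrative, not TAB's actual configuration:

```python
# A sketch of the saturation rule under two assumptions from that era:
# every disk write blocks its process until the write completes, and
# each CPU runs exactly one process at a time. One process can then be
# parked waiting on each disk while one runs on each CPU.

def processes_to_saturate(num_disks: int, num_cpus: int) -> int:
    """Processes needed to keep every disk and every CPU busy at once."""
    return num_disks + num_cpus

# e.g. a machine with 4 disks and 1 CPU is saturated by about 5
# partitions; any more just queue behind the hardware.
print(processes_to_saturate(4, 1))  # -> 5
```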
This saved TAB money and led them down a particularly efficient path. Today the maths is a little more complicated, but the principles are the same - something that is worth holding close in the pursuit of performance.
Phil's crystal-clear explanation - which I think I've mangled - was a key "click" moment for me, forcing me to think hard about how things actually worked.
Phil Steele - my personal computing hero.
Wednesday 17 August 2011
Job Description or Job Requirements?
I've spent - we all have - thousands of hours in front of whiteboards or editors designing in great detail what a system needs to do. We all know full-time business analysts whose entire existence is based around the detailed description of the way systems need to work.
Why don't we do the same for the people we are trying to hire?
Partly it's the fuzziness of dealing with humans rather than computers - pesky people - but otherwise very structured people tend to fall apart around job descriptions. For years I've described what I do as "stuff" rather than going into any greater detail, simply because of the effort required to break it down and explain it.
With 75% of programmer jobs largely consisting of putting pretty front ends on databases, and a lot of system administration being similarly dull, why would a prospective employee plump for your company rather than somewhere else? Every company has interesting, unique problems, and maybe we should be taking some time to write them down.
Having had the unusual luxury over the last few months of writing down requirements for outsourcers, and then having to code part of that system, I was reminded of something I'd forgotten:

Doing some up front analysis really helps
Kind of obvious, but easy to forget. When I sat down to write this post I had this in mind, since it had saved so much time in my previous lives. For some positions I'd had a job description (even if I hadn't shared it completely until after the person was hired) that I'd carried from pre-recruitment through to their first annual assessment. Even though it had cost me significant time in each case, sometimes a bit of sleep, and a significant amount of coffee, it had made those follow-up tasks trivial.
This contrasted strongly with those people (I'll say sorry now) whose JD had been unrelated to the actual criteria they were then assessed on annually. Those reviews had always ended up in a mess, where the result was a compromise that didn't actually tell anyone - the employee, me or the company - how they'd done that year.
Just like a project's requirements, a job description isn't set in stone and can adapt over time. But, just like that project, it helps a lot if you've done some of the analysis up front.
Tuesday 16 August 2011
The right brief
When I first started looking around for jobs a (long) while ago, I was always perplexed by the emphasis on X years' experience in different tools and technologies. It was good to have a giggle about the request for five years' experience in a system that had only existed for two. And we've all seen the ads demanding a raft of experience for less money than you'd get filling shelves at a supermarket.
So I've written job descriptions not too far off that, even though in my own head I should have known better - four years of this or ten years of that. And they were never very effective. We ended up going through CV after CV that technically met the criteria but wasn't right.
The best results always came from briefing agents and saying all the things that weren't on the job description - all the soft things that were really hard to write down, the real job description. I've worked with some stunning agents but the best of the best have always been the ones that knew the candidate and knew the job.
An answer should have been simply to write down, as the job description, all those soft things that were being missed out. But then I got no CVs through from agents. The annoying fact was that such descriptions just didn't lend themselves to the sort of tools the agents were using to scrounge CVs in the first place.
The irony is that all those thousands of job ads demanding X years' experience often exist because we can't accurately describe the skills and experience we actually want. As laughable as 20 years in .Net is, or as irrelevant as 10 years in mobile, they are the shorthand we use for describing the sort of person we are after. Annoyingly, given this to work with, the employment industry has optimised its tools for just this sort of vagueness, and has done so admirably well.
We should be giving the agents a decent chance though - proper briefings, verbal or written, increase our chances of finding the right candidate with the right skills.