Tuesday, 2 September 2014

The Great Vacuum Cleaner Rip-Off

Recent legislation has banned the manufacture or import of vacuum cleaners with a motor power over 1600 watts.  And predictably, idiots who don't understand physics are complaining.

The simple fact is that manufacturers have been ripping you off.

Let me explain.

A vacuum cleaner turns electricity from the mains into kinetic energy which it imparts to the stream of moving air  (which is what you want it to do).  It also turns electricity into heat  (which you don't want it to do)  and sound  (which you can't avoid, but it isn't much anyway; a screaming toddler is producing about a watt of sound energy.  Yes, loudspeakers really are that inefficient).

You can measure how fast KE is being imparted to the air stream by multiplying the volume flow rate by the pressure change.  Assuming you use the fundamental units throughout, you get metres cubed per second, times Newtons per metre squared = Newton-metres per second = Joules per second = Watts.  That figure -- the "air-watts" -- is all the dirt in your carpet cares about.  The difference between air-watts and electrical watts is what is getting wasted.
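To make the arithmetic concrete, here is a minimal sketch in Python.  The flow and pressure figures are invented round numbers for illustration, not measurements of any particular cleaner:

```python
# Air power = volume flow rate * pressure difference.
# m^3/s * N/m^2 = N·m/s = J/s = W.

def air_watts(flow_m3_per_s, pressure_pa):
    """KE imparted to the air stream per second, in watts."""
    return flow_m3_per_s * pressure_pa

flow = 0.030        # 30 litres per second, in m^3/s (illustrative)
pressure = 10_000   # 10 kPa suction, in N/m^2 i.e. pascals (illustrative)

output = air_watts(flow, pressure)    # about 300 "air-watts"
electrical = 1600                     # rated motor power in watts
efficiency = output / electrical      # fraction of input doing useful work

print(f"{output:.0f} air-watts from {electrical} W in: {efficiency:.0%} efficient")
```

Everything electrical input minus air-watts is being turned into heat and noise.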

From the vacuum cleaner manufacturer's point of view, since you already have a moving air stream, you can use this for cooling the motor; so overheating really isn't a huge deal.  You can therefore make your motor inefficient -- using copper wire that is too thin, and/or not enough steel to leave any molecules un-magnetised right through the crests and troughs of each cycle of the mains -- so it turns some electricity into heat that otherwise would have been turned into KE.  Also from the vacuum cleaner manufacturer's point of view, a bigger number on the advertisement attracts more buyers.  So it's actually in the manufacturers' interest to make vacuum cleaners as inefficient as possible, just so as to be able to claim big numbers!

And people have fallen for this hook, line and sinker.  I despair.

(My proposed solution would be to require standardised testing of vacuum cleaners, including ability to extract different types of dirt from different surfaces, percentage efficiency as air-watts / electrical watts and expected lifetime of product.  Of course this will make additional work for the manufacturers; but we consumers outnumber them, and the law is supposed to work for us .....)

Thursday, 24 July 2014

Finally, as promised, eventually: Fixing Akonadi over NFS

Firstly, apologies for the great lateness of this post.  I could say it was due to personal circumstances &c.; but I'll tell the truth for once, and say it was just me being a dizzy mare.

Anyway.  The scenario is this.  We have several clients running KDE, and using an NFS share for their home folder.  By default, Akonadi wants to run its own MySQL server, as the individual user, with data in the user's home folder.  Which means that every time a record needs to be modified, we have to lock a file  (to make sure only one process is going to modify it at once);  alter it; and then unlock it.  This is creating a lot of network traffic, as compared to just sending a query over the network, and it also has potential problems if the timing of all these messages gets messed up.

So the logical conclusion is, we need to have a central MySQL server for every user's Akonadi data, so the traffic is just queries and responses and we aren't relying on locking files on a remote server.  (If you read the first part of this story, you'll remember that this particular use case is actually violating assumption #3; this is a more complicated case than a simple home user, and there is someone in charge who ought to know better.)
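As a sketch of what that looks like in practice, Akonadi can be pointed at an existing server through its per-user configuration file (on most distributions, ~/.config/akonadi/akonadiserverrc).  The host, database name and credentials below are placeholders for illustration:

```ini
; Tell Akonadi to use an external MySQL server rather than
; spawning its own instance with data in the NFS home folder.
; Host, Name, User and Password here are made-up examples.
[%General]
Driver=QMYSQL

[QMYSQL]
StartServer=false
Host=dbserver.example.lan
Name=akonadi_alice
User=alice
Password=secret
```

With StartServer=false, the only NFS traffic left is Akonadi's own small configuration files; the database I/O all happens on the server's local disks.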

Saturday, 14 June 2014

Decisions that Bite, #269

Sometimes, a bunch of decisions, each of which – considered in isolation – is perfectly sound and rational, can come together to bite you in a really nasty place.

1.  Database servers are usually configured with their own local disk drives, so they can get the maximum throughput supported by the motherboard, rather than storing data on a remotely-mounted network share.  Acquiring and releasing file locks, which is something that databases have to do a lot, is also faster and more reliable using local hardware to which the kernel is talking directly, rather than by sending requests and acknowledgements across the network where it contributes to the overhead.

2.  You don't always want to have to think about file formats in too much depth.  Sometimes, what you have to store lends itself naturally to a database.  Someone else has already done the hard work, implementing the storage engine.  All you need to concentrate on is getting the data in and out of the application.

(Database servers also tend to be surprisingly light on resources; because traditionally, they have had no alternative but to be.)

Using a ready-made database server to store your data can make better sense than devising your own ad-hoc file format that may limit you in future in ways that, by definition, you haven't thought of.

3.  Desktop users shouldn't need to be intimately familiar with the under-bonnet workings of a database merely in order to use their computers.  If a database server is used by an application, its configuration should be managed by the application itself.  This is especially so if the application makes use of esoteric configuration options.

Starting a specific instance of the database server, just for that one user, with a configuration decided by the application and so specifically suited to it, can make better sense than trying to use an existing system-wide server.  (You don't have to be root to run a server, if you listen on a non-privileged port and take care not to write anywhere you aren't allowed.)

4.  A network with multiple users running a limited range of desktop applications  (mainly a web browser and an e-mail client)  really benefits from letting any user log in at any workstation and get their own home folder.  You can rotate more staff than you have workstations through any combination of shifts; and know that, as long as there are enough workstations for every team member, everyone will be able to access their own home folder.

So, it makes sense in such a situation to use NIS to handle user logins, and mount /home on an NFS share.

Four eminently sensible decisions, taken in isolation.  But put them all together and what you get is multiple databases in users' home folders on the same NFS share.

This only became apparent when I upgraded a bunch of desktops to Debian Wheezy.  (I'd resisted Squeeze for long enough already, because of the move to KDE4.  KDE3 was good enough for most of what we needed – it's just that awkward "most of" that's the problem.)

Now, Wheezy uses KDE4, as did Squeeze.  KDE4 includes Akonadi, which uses a MySQL database to store data.  And it starts an instance per user, configured especially to use a data store under that user's home folder.  The newer applications in Wheezy make heavier use of Akonadi's database functionality.

But Wheezy also has MySQL 5.5, whose default database engine is InnoDB – as opposed to Squeeze's 5.1 using MyISAM by default.

And this is where the trouble begins; because InnoDB tables don't play nicely over network shares.  NFS is too busy locking and unlocking files to serve much data; and very occasionally, messages get out-of-order and a lock or unlock is missed, resulting in slowdown waiting for the lock to timeout or – worse – data corruption if a write occurred while a file was supposed to have been locked.

So when four or five users on the same network share all try to start up their Akonadi servers at exactly the same time, the result is a packet jam not quite of VoIP-degrading proportions, but certainly getting on for it.  All because of an unfortunate concatenation of otherwise seemingly-innocuous circumstances.

Fortunately, there is a happy ending.  I'll explain all in the next post .....

Wednesday, 26 March 2014

An Anniversary!

Today is the anniversary of the installation of solar panels at Montoya Mansions!

In my first year, I have generated 1636 kWh. This is comfortably more than the predicted 1400 kWh for my first year's production, and alters the payback threshold in my favour: if the price of electricity exceeds 14.042 p per unit, my solar setup has cost less than the electricity it will generate over 25 years (accounting for deterioration).
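For the curious, the break-even arithmetic can be sketched like this.  The install cost and the yearly degradation rate below are assumptions for illustration only -- the post gives the first-year output and the resulting break-even price, not those inputs:

```python
# Sketch of the solar payback arithmetic.  install_cost_pounds and
# degradation are assumed figures, not taken from the post.

install_cost_pounds = 5743.0   # assumed up-front cost of the installation
first_year_kwh = 1636.0        # measured first-year generation
degradation = 0.005            # assumed 0.5% output loss per year
years = 25

# Total generation over the panels' assumed 25-year life,
# with output shrinking geometrically year on year:
total_kwh = sum(first_year_kwh * (1 - degradation) ** y for y in range(years))

# Electricity price (pence per unit) at which the panels
# have generated exactly their own cost:
break_even_pence = install_cost_pounds * 100 / total_kwh
print(f"{total_kwh:.0f} kWh over {years} years; "
      f"break-even at {break_even_pence:.3f} p/unit")
```

Plug in the real install cost and degradation curve, and the same two lines give the 14.042 p figure.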

The timing of my feed-in tariff payments has fallen nicely, such that each payment is due almost exactly halfway between an equinox and a solstice; thus, they correspond to the seasons as counted by duration of daylight. My first payment was £117.75 on 3 August; then £73.88 on 3 November. The sun could barely be bothered to rise in Winter, leading to a dire £29.12 on 3 February (although it still more than covered the £1.82 per week standing charge). Come 3 May, I expect to receive another £70 or £80 for the Spring (this payment and the Autumn one should be fairly similar, by symmetry: the graph of day length vs. day of the year is sinusoidal).

When I get my storage batteries and UPS plumbed in, I should be able to save even more, as I won't need to pay for all the electricity I will be using in the evening .....

Sunday, 16 March 2014

Great news from the Independent on Sunday

The Independent on Sunday have announced a new policy of refusing to review children's books which are aimed only at boys, or only at girls:


Which is A Good Thing. We don't need any more glittery pink books telling girls they should aspire to be princesses or fashion models; nor trashy, dumbed-down books for boys that ultimately only reinforce the idea that reading is for girls.

Thanks to Let Toys Be Toys for breaking this news.

Sunday, 16 February 2014

You can't run that from batteries!

Recently, I have managed to acquire a used APC 3000 VA uninterruptible power supply – minus batteries.  (APC branded battery packs cost almost as much as a new UPS.  Generic sealed lead-acid batteries of the same capacity can be bought from the likes of http://www.tayna.co.uk/ for a fraction of that.)  The UPS is basically a self-contained battery pack, charger and inverter.  It converts DC from a bank of batteries, which are ordinarily kept charged from the mains, to AC when the mains fails.  I plan to use this, in conjunction with a bank of large batteries, to implement a solar energy storage system.

Now I have managed to pick up some batteries.  Although they are not the right ones for this UPS, they are "just about" compatible – the right voltage, but the wrong capacity  (there would normally be two series chains of four 12 V, 5.5 Ah batteries, in parallel for 48 V / 11 Ah; I have just one series chain of four 12 V, 7 Ah batteries, for 48 V / 7 Ah).
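The series/parallel arithmetic in that parenthesis can be sketched as a couple of lines of Python -- in series, voltages add and capacity stays the same; in parallel, capacities add and voltage stays the same:

```python
# Series/parallel arithmetic for a lead-acid battery bank.

def pack(cell_v, cell_ah, in_series, in_parallel):
    """Return (voltage, amp-hours) for a bank of identical batteries:
    `in_parallel` chains, each of `in_series` batteries in series."""
    return cell_v * in_series, cell_ah * in_parallel

# The pack the UPS expects: two parallel chains of four 12 V / 5.5 Ah
assert pack(12, 5.5, in_series=4, in_parallel=2) == (48, 11.0)

# The pack actually fitted: one chain of four 12 V / 7 Ah
assert pack(12, 7, in_series=4, in_parallel=1) == (48, 7)
```

Same 48 V, so the inverter is happy; just less charge in reserve, so shorter runtime.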

These batteries are rather used, and have a remaining usable capacity somewhat lower than advertised – but they cost nothing, which is always a point in favour.

And of course, having some batteries, we can test out the UPS!

You can see clearly that these aren't the right batteries -- I have had to stand the battery tray on top of the UPS because these are too tall.  The lamp was from an earlier test; look out for the black extension lead heading off the bottom left corner.  The 4-way extension lead is fitted with a "C20" plug for the UPS output  (which can supply up to 3000 VA -- more than the usual "C14" / kettle-type connector can handle).  The other end goes to .....

This microwave oven!  It's cooking just the burger from a microwave cheeseburger.  I toasted the cob separately, and added my own special tomato and herb sauce.

I was worried that the microwave would pull down the battery voltage far enough to trip the undervoltage cut-out in the UPS – this was what happened with a 2 kW kettle.  Fortunately, the slow-start action of the magnetron filament heating up was enough to allow the UPS batteries to recover.

This is a good sign.  And the nice people at http://www.tayna.co.uk/ are very helpful.  They certainly know their batteries.

Wednesday, 1 January 2014

Staffordshire Oatcakes

These are a kind of pancake that doesn't actually taste of anything.  The idea is that they can be served wrapped around literally any sweet or savoury filling, since there is no possibility of a flavour clash; and the texture of the oatcake brings out the flavour of the filling.

They are ideal for travelling, because you can wrap them up in paper in such a way as to be able to take a few bites and re-wrap every so often.  And despite the fact that oatcakes can be surprisingly filling, mini-oatcakes with ice cream also make a great dessert after a barbeque.

My parents and three of my grandparents were born in Stoke-on-Trent, even though I wasn't born there myself.  That does not stop me appreciating the Food of the Gods, at any rate, and I think I have the heritage.
  • 250 g. plain flour
  • 250 g. oatmeal
  • 500 ml. warm water
  • 500 ml. milk
  • 1 sachet (7 g.) bread machine yeast
Mix the liquids, which should be at body temperature or maybe slightly higher; don't exceed 40 degrees.  Add the dry ingredients.  Mix thoroughly for 1 minute, then cover loosely and leave to stand for 40 minutes to 1 hour, until it has frothed up and then steadied out a bit.  Mix again for another minute.

Heat a large, heavy-based frying pan, melt a lump of butter over it and ladle in a dollop of the oatcake batter.  Shake the pan about to cover the whole of the base and keep frying, watching the clock.  As soon as the oatcake is beginning to come loose from the pan, note how much time has elapsed  (about a minute with a 30 cm. pan; maybe 90 seconds if the oatcake is really thick)  and time this long again.  By this time the top side should be hardening all over and should be full of 3 - 4 mm. bubble holes.  Turn over the oatcake.  You can toss it if you are experienced with pancake-tossing; otherwise, slide the oatcake sideways onto a plate, then pick up the plate and invert it over the pan.  Note, the bubble holes in the side cooked first will be a lot smaller, mostly under 1 mm.  Cook for as long on this side as you did on the other side, then slide sideways onto a plate.

As a guide, you should get 6 - 8, 30 cm. oatcakes out of this much mixture.  It's best to add fillings, roll up and serve at once -- even microwave awhile if needed, to make sure the fillings are properly hot.  But you can allow the oatcakes to cool  (put a layer of kitchen foil, greaseproof paper or similar between each one and the next to avoid them sticking together)  and serve later.  They will keep for a few days in the fridge.

Fillings:  Oatcakes were basically a lunchtime meal for the Staffordshire miners; so fillings such as bacon, mushroom and cheese or scrambled egg, perhaps with chopped sausage, would have been popular.  Or use something else that comes out of Staffordshire -- just down the A50 via Uttoxeter to Burton-on-Trent this time, find a nice real ale from a local brewery and cook up some cheap steak in an ale gravy with onions and mushrooms.  Thicken the gravy enough, and you can almost eat a steak and ale oatcake bare handed.  (But it probably would be even more delicious served on a plate with chunky hand-cut chips, petits pois and corn-on-the-cob.)  Leftovers can probably be wrapped up in an oatcake, too.

And while an oatcake doesn't make a very good substitute for a pizza base, spread with the sauce and with cheese and pizza toppings rolled up in the middle it makes a different kind of meal altogether.

Dessert oatcakes to be served with a sweet filling probably are best made in a smaller frying pan.  Fill with any combination of fruit, jam, jelly, custard, whipped cream, ice cream, grated chocolate and sticky sweet syrupy liquids and sprinkles you like.

If travelling, you may even want to prepare an oatcake with mostly a savoury filling, but then a bit of sweet filling -- jam or fruit compôte and custard, say -- at one end. You can cut off a piece of oatcake to make a barrier between the sections.  Roll up, wrap in paper, mark which end to open first and there's main course and pudding in one!