Atwood: Yeah, I was going through my blog…
Spolsky: It seems like half of all sites would be broken.
Anyway, here’s the base configuration for my browsers these days:
If the security cable tie isn’t pulled tight enough to engage the teeth, it can be pulled right off. Had it been properly secured, it would have had to be damaged (with scissors) to be removed.
I did scrutineering at the last Victorian state election, and apart from the shocking level of informal voting and above-the-line voting, there was another shock.
Electoral fraud – or the possibility of it.
The nice thing about living in Australia is that we take our democracy seriously, and we balance being able to prove what the outcome was with ballot secrecy. Nobody, no level of government or industry, no individual, will know how you voted without you telling them. Yet at the same time we can have confidence that our electoral system is not being rorted; our governments change back and forth, and each time they do, representatives of both sides keep a close watch on the activities of the employees of the AEC and VEC, eyeballing each individual vote and knowing that they are all distinctly different from the others in spite of being a collection of handwritten marks on a slip of paper.
To minimize the risks of ballot box tampering, at the start of voting the ballot boxes (just big cardboard boxes here in Australia) are sealed shut with serialized cable-ties. An independent witness (typically one of the first voters to wander into the polling station) watches an Electoral Commission employee do this; their details are recorded (by details, I think that means a signature, but it could actually be enough to track the person down afterwards) and they sign the form that records the sealing of those particular ballot boxes.
So how come they use cable ties that can be “done up” and yet the teeth don’t engage – thus leading to an unsealed ballot box? Is it too much to ask for a cable tie with teeth on both sides?
I should have kicked up a fuss, but it was a safe booth in a safe seat, and who needs the hassle?
Anyways, the reason I relate this story is that I’ve been seeing comments along the lines of “this is the 21st century, why the hell are we using pencil and paper?” Because, dickwads, computers don’t leave a fucking audit trail. There’s no scrutineering of electrons. How the hell are you meant to verify that Clive Palmer didn’t in fact get 98% of the vote? You can’t. Interesting that Clive Palmer owns the company that supplied all of the (suspiciously cheap) voting machines to the AEC, but that hasn’t got anything to do with it. And the cost! Pencils are 10c each, paper is about a cent a sheet. A shitty computer is $500, and requires a bunch of electricity. “Do it on the Internet, or use smart phones!” I hear you say. No, because while nearly everyone can move a pencil around, significantly fewer can use their computer to vote. And there’s no connection between how you voted and the counting of votes. The announced result could be anything, and there’d be absolutely no way of proving it wrong. So, yes, computers are shiny and clearly the best way of implementing a voting system, if you want an electoral system you can’t actually trust.
So, I’m looking at properties, and a number are down on the floodplain near the local moving body of water, a river/creek. I wonder to myself if the area is at any risk from floodwater; should I even bother looking at the area?
The council, being the government body most connected to the area, ought to know. It doesn't; all it can tell me is whether a specific property has a flood overlay, which says that modelling has determined it is at risk of a 1 in 100 year flood.
What is the 1 in 100 year flood event?
The 1 in 100 year flood event is the storm that happens on average once every one hundred years (or a 1% chance of occurring in any given year).
Now, that means in any given year there's a 99% chance you're not going to get flooded. In 100 years, that means a 0.99^100 or a 36.6% chance of not getting flooded. A 2/3 chance of having water washing through your home at some point there. Basically, that's a guarantee that in the next century your home will be damper than normal – because the 1 in 100 year events are calculated off historic data, not forward climate models. And the forward models say that things are only going to get more extreme; have you noticed how 1 in 100 year events seem to happen to the same place every decade or so?
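To double-check that arithmetic, here's a quick sketch (assuming flood years are independent, which is the standard simplification behind these figures):

```python
# Chance of escaping a "1 in 100 year" flood over a century,
# assuming each year is independent with a 1% flood probability.
p_flood_per_year = 0.01
years = 100

p_dry = (1 - p_flood_per_year) ** years   # 0.99^100
p_wet = 1 - p_dry

print(f"Chance of staying dry for {years} years: {p_dry:.1%}")   # ~36.6%
print(f"Chance of at least one flood: {p_wet:.1%}")              # ~63.4%
```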
In fact, pretty much anyone you talk to – water utilities for example – will only talk about 1 in 100 events. Vital government infrastructure (stuff that has to keep operating in the event of a flood disaster, like hospitals and my home) has to be above the 1 in 500 line. From what I'm told, they calculate this on a site-by-site basis rather than having a map (they're not building a bunch of new hospitals, so it's easier that way). Sites aren't rated as being 1 in 110 year; you're either in the 100 year box or not rated at all.
The gist of what I was able to read into the subtext of the hints being passed in my conversation with a town planner specializing in flooding was: Floodplains get flooded, even in cities, even if there's a wetlands further upriver that could absorb a sudden influx of water, even if the sides of the creek are quite steep and the channel is surprisingly broad, and even if there are barricades; If you don't like that, don't live there.
So I won't. It makes searching for a home so much easier, even if the homes out of the floodplain are more expensive and built on those annoyingly sloped hill things.
Actually, this reminds me of the 1972 Elizabeth St Floods my Mum told me about getting caught in. I would never have guessed a major street in our CBD could turn into a river – and then it happened again in 2010.
Because of the interminable delay in page redirection on my grossly underspec’d netbook, I’ve added Don’t track me Google (Chrome will download it but then leads you to believe it won’t let you install it; however, clicking *->Tools->Extensions and then dragging from the download bar onto the Extensions list installs it just fine).
Because the Australian government seems increasingly intent on reading my mail, I’ve gotten quite interested in preventing them doing so. Encrypted communications provide private browsing — what goes back and forth is a secret, but not who is having the conversation. The EFF’s HTTPS Everywhere (which works on Firefox, and kinda on Chrome) enforces a preference for SSL communications where available. However, in the real-world parallel to the electronic, that ensures that instead of my ISP being able to see me walk around the streets and then into glass-walled buildings, the buildings now become opaque. They still know which buildings I’ve walked into. The government wants to know what buildings I’ve walked into because… ummm… the building which has bomb-making instructions… we can prove… ummm… something. But now we’re safe! The ineptitude of the government’s censorship plans leaves me with no desire to allow random ISP and government employees to rifle through whatever-it-is-I-do-on-the-Internet whenever they feel like it.
As such, the next step is to start using an anonymising network; initially I2P seemed to be just the ticket. I2P is an unofficial top level domain, and under it you can find — amongst other things — eepsites, anonymously hosted web sites. Problem is, they serve HTML, and the pages could refer you off the .i2p TLD, thus exposing your IP address (they might do this via a web-bug or something as innocuous as an externally hosted CSS file). I2P is primarily a darknet, not an anonymising proxy; it’s an internet that doesn’t play by the same rules, and the effect is that no-one on it can identify anyone else on it (with some demonstrated exceptions). The I2P network seems to be populated by scary people and paranoid people. By far the biggest problem is that I2P doesn’t work very well for surfing the Internet, due to its limited outbound connection (outproxy) to the wider Internet. Given the http://i2p.to proxy allows viewing this darknet from outside, there’s not much point running I2P unless you want to anonymously publish information.
So I2P isn’t really enough on its own to hide your identity online. I don’t want to wander the darknet, I want to be out in the light of the Internet using my Cloak of Invisibility. This is where the only (non-VPN) game in town comes in, along with all its demonstrated weaknesses: Tor. The Tor network is accessed via the TorButton plugin.
When using TorButton, to minimize your risk profile you can’t run random crap on your browser — you’ve got to just browse. As such, the Tor developers recommend you use TorButton with a bunch of other tools (many of which I’ve already mentioned), which are all helpfully bundled up into the Tor Browser bundle, a secured version of Firefox — not a plugin — that uses the Tor network. They’re also very down on embedded environments like Flash, Silverlight, Quicktime, RealPlayer… you get the idea. In addition, those datafiles that carry active content — .DOC and .PDF — scare the willies out of them, and they want you to only open them once you’re disconnected from the Tor network.
In fact, they go so far as to recommend Tails running inside a VM, which means all your traffic goes via Tor. That seems to be the optimal solution.
Australian consumers can now use their Visa cards to pay for small value transactions of $35 or less without entering a PIN or signing a receipt, Visa announced today.
This requires the retailer to actively pursue this strategy, but the payment network no longer demands identification for these “low value” transactions. They claim that security isn’t compromised by this. Their logic goes like this:
$35 isn’t much.
If someone steals your card, they can only obtain $35 worth of goods and services per transaction until the card is shut down.
Your card issuer will eventually notice all of these transactions and phone you to make sure everything is okay.
The retailer wears the risk of these unauthorised transactions.
So what’s to stop your teenager borrowing your card to go buy snacks at McDonalds (one of the early adopters of this security-flexibility) whenever they’re hungry? The card company’s logic goes like this:
$35 isn’t much.
If someone borrows your card without your knowledge, they can only obtain $35 worth of goods and services per transaction.
The retailer wears the risk of these unauthorised transactions.
So why would a retailer run the risk of a month’s worth of Coles supermarket purchases (another early adopter) – which could easily exceed $1000 with one or two purchases a day – being fraudulently run up? Because when you complain to your card issuer, they require a police report. The police, being a diligent lot, will follow up these $35 thefts, go to the stores, look at the video footage, realise they don’t know what you look like, come around to your house and compare the picture against you and decide it’s not you. Then they’ll think “How did this person who isn’t the cardholder get hold of the card without the cardholder noticing until they got the bill?” and they’ll suspect an inside job, and ask you if you recognise the person in the video footage. If you want your teenager to have a criminal record with 30+ theft convictions you’ll scream “Sarah! Come here!” and that will be that; otherwise you might stay quiet.
Of course, it might not be your teenage daughter with the munchies; somebody at work might borrow the card from the wallet on your desk to buy lunch when they’ve run out of cash, or friends might when you’re out “dining” at McDonalds.
Worse yet are the organised criminals who can easily prove the expenditure is not their own – it was in another state! Because there’s no motivation at all to Express Post your card to an interstate confederate for them to have a quick run around with it before Express Posting it back. In short order it can become quite a bill too – at Apple Stores it’s up to $150 without a signature being needed. These expenditures can be book-ended by legit local purchases, leading the card holder to say “well, I never authorized that, I’ve still got the card, so you figure it out”. The costs of these thefts – which all the video footage in the world isn’t going to connect to the cardholder, nor, with some precautions, to the confederate – go onto the general costs of running the retail operation, pushing up prices.
Retailers have always had the option of skipping the need to sign for a transaction – on their own heads be it. So presumably they think that the video footage will reduce the level of experienced loss.
Now, presumably this fraud will cost less than the expenditure saved – assuming a check-out chick costs $25/hour to employ, it implies at least 1.4 person-hours must be saved per fraud, and assuming a saving of four seconds per transaction, they’re expecting no more than 1 fraud in about 1260 transactions. But I ask: isn’t it better to pay $35 to Aussie Battlers… working Aussie families… our most valuable assets rather than hand over, say $30, to criminals through lax security?
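Sanity-checking that break-even figure (the $25/hour wage and the four-second saving are the assumptions above, not Visa's numbers):

```python
# How many no-PIN transactions must each fraud be spread over
# for the labour saving to cover the $35 loss?
wage_per_hour = 25.0     # assumed checkout operator cost, $/hour
fraud_loss = 35.0        # one maximum-value no-PIN fraud, $
seconds_saved = 4.0      # assumed time saved per transaction

hours_needed = fraud_loss / wage_per_hour            # 1.4 person-hours
transactions = hours_needed * 3600 / seconds_saved   # 1260 transactions

print(f"{hours_needed} person-hours, i.e. 1 fraud per {transactions:.0f} transactions")
```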
With contactless payments finally with us, there’s even more reason to fear unauthorized transactions, per this video of a guy stealing the identifying information off a smart card:
It appears that in addition to annual fees, international conversion fees, interest charges and so forth, the price of a credit card is the same as freedom: eternal vigilance.
All of this is lovely and academic, but the activity by retailers and card issuers has the effect of turning every card in my wallet into many unchallenged $35 purchases. This acts as a motivator to steal my cards from me. If my wallet is stolen, I can immediately cancel the cards, so no risk there. So to get at the lovely $35 goodness, the thief needs to stop me doing that – clonking the victim on the head is a good way of preventing reporting. I like my head. I don’t mind spending 4 seconds a transaction to prevent an increase in people getting brained.
The worst part is there’s no way to opt out of this reduced security; I can’t say to Visa: “No, for my card, only pay money when a PIN is supplied.” It’s forced on everyone. I remember when these PIN things came out, and I was repeatedly assured that they were more secure than a signature, and I could assure them that they weren’t – the damn PIN is encoded on the mag strip of the card (precisely copied in seconds!), and any fool can see you keying your PIN in. Now another layer of security has been whittled away, leaving… video investigation.
Sounds similar to these kinds of fake Windows anti-virus scans which you see around the place, and try to convince you to click and download an executable which will supposedly clean up your PC:
This type of thing reinforces the fact that no browser/platform is safe from malware, and that it’s important not to regularly run your account with Admin privileges on your PC.
Personally I reckon it wouldn’t hurt to have a setting in Windows (and other operating systems) that prevents running executables from any directory where the current (non-Admin user) has write-permissions, eg only letting them run programs that have been installed by an Administrator.
Does any OS offer something like that at the moment?
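As it happens, Unix-like systems can approximate this today with the noexec mount option: binaries on a noexec filesystem can't be executed directly (though it won't stop a script being handed to an interpreter). A sketch, assuming the user-writable areas sit on their own partitions (device names are illustrative):

```shell
# /etc/fstab entries (illustrative): user-writable filesystems
# mounted so binaries on them cannot be executed directly.
#   /dev/sda3  /home  ext4  defaults,noexec,nosuid,nodev  0  2
#   /dev/sda4  /tmp   ext4  defaults,noexec,nosuid,nodev  0  2

# The same restriction can be applied to a live system:
mount -o remount,noexec /home
```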
On the server, go to Control Panel, Administrative Tools, Local Security Policy
Local policies / Security options
Check out the Network Security LAN Manager Authentication Level option
If it’s set to “NTLMv2 response only” or similar, then change it to “Send LM & NTLM – use NTLMv2 session security if negotiated”
This MSKB article has some material on it: Q823659 — it’s helpfully buried with lots of other security policy settings. Look about two-thirds of the way down for “Network security: Lan Manager authentication level”.
If the policy is set to (5) Send NTLMv2 response only\refuse LM & NTLM on the target computer that you want to connect to, you must either lower the setting on that computer or set the security to the same setting that is on the source computer that you are connecting from.
Yes, I suppose I could work out how to change the client host to use NTLM V2. But I really don’t want to break anything else.
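For what it's worth, the GUI setting above maps to the LmCompatibilityLevel registry value (0 through 5), so it can also be scripted; this is a sketch only, since getting the level wrong can break domain logons:

```shell
:: "Send LM & NTLM - use NTLMv2 session security if negotiated" is level 1.
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 1 /f

:: Confirm the current value.
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel
```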
Oh, and the KB article almost gleefully notes something we saw when wrestling with this:
One effect of incompatible settings is that if the server requires NTLMv2 (value 5), but the client is configured to use LM and NTLMv1 only (value 0), the user who tries authentication experiences a logon failure that has a bad password and that increments the bad password count. If account lock-out is configured, the user may eventually be locked out.
This error can occur when connecting to a secure (HTTPS) server. It means that the server is trying to set up a secure connection but, due to a disastrous misconfiguration, the connection wouldn’t be secure at all!
In this case the server needs to be fixed. Chrome won’t use insecure connections in order to protect your privacy.
You may find that the site works in other browsers. This is because other browsers, unknowingly or intentionally, work around the broken servers. But this doesn’t change the fact that the servers have a glaring security hole and should be fixed.
This error message is triggered if the SSL/TLS handshake attempts to use a public key smaller than 512 bits for ephemeral Diffie-Hellman key agreement.
For website administrators
If your website has this problem, either:
1. use a 1024-bit (or larger) Diffie-Hellman key for the DHE_RSA SSL cipher suites, or
2. disable all DHE SSL cipher suites.
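To see which case applies to a given server from the outside, one option is to force a DHE handshake with OpenSSL's s_client (the hostname is a placeholder, and the "Server Temp Key" line needs OpenSSL 1.0.2 or later):

```shell
# Offer only ephemeral-DH cipher suites and inspect the handshake.
# A "Server Temp Key: DH, 512 bits" line indicates the weak key;
# a handshake failure suggests the server has disabled DHE suites.
openssl s_client -connect example.com:443 -cipher "EDH" < /dev/null
```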
The Age article seems to assume that Citylink must use a 1024 bit key… but then, if the writer thinks Google Chrome OS is trying to compete with MS-DOS, it’s clear he may not be the most IT-savvy person.
My reading of the error is that it’s the combination of DHE key agreement and the small key that is the problem. I’m not a net security expert, but that’s what point 2 appears to be saying.
It’s certainly not the case, as implied in the article, that they must use a massive 1024-bit cipher key — I’ve just logged into the Commonwealth Bank’s site, and all is working fine with their 256-bit key.
While Citylink/Transurban might be whinging that they’ve done nothing wrong, given all the other secure sites I use with Chrome are working perfectly, the conclusion I come to is that indeed there is a misconfiguration on their end.
It’s important that they get this right. After all, one wouldn’t want personal information being transmitted insecurely. It could get picked up by a passing Google Streetview car doing packet sniffing!
Update 10:45am: The reference to MS-DOS has now been removed from the article, which now reads: an internet-infused operating system for computers that takes on Microsoft.
It also no longer says Only one browser was available… in 2000, but has been changed to say One browser was dominant.
Today, a colleague suggested the best mitigation I have heard so far: deploying a GPO disallowing the use of executable files that are not on the C: drive. This will work for most environments, and you really shouldn’t be running executables from USB drives and network shares anyway. We tested this solution against the vulnerability and it does in fact provide protection.
…which would be nice, but I’m buggered if I can find it in gpedit.msc.
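If it's any help, my best guess is that this lives under Software Restriction Policies rather than an Administrative Template, which would explain why it's hard to find in gpedit.msc's usual places. The shape of the idea (paths illustrative) would be:

```
Computer Configuration
  Windows Settings
    Security Settings
      Software Restriction Policies
        Security Levels: set "Disallowed" as the default
        Additional Rules: a Path Rule for C:\ set to Unrestricted
```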