Category Archives: Security

Chrome starts blocking web sites using HTTPS over SSL3

It would seem some organisations still haven’t got the message on SSL vulnerabilities, even one with a publicity-friendly name like Poodle.

For instance, Swinburne University of Technology, which is actually one of Australia’s better universities to learn computer science, has its student portal still trying to use SSL 3.

MySwinburne SSL error

My son was trying to figure out why he couldn’t connect with Chrome. Only by clicking for details do you get the slightly cryptic error: “ERR_SSL_FALLBACK_BEYOND_MINIMUM_VERSION”

It turns out Chrome has disabled fallback to SSL 3. You can override it for now (though it’s easier just to use another browser), but soon fallback will be disabled completely. Site owners will need to make sure their servers support TLS instead.

They’ve also started giving a warning on SHA-1 certificates — no more green logo; it’s gone yellow, with a warning: “This site is using outdated security settings that may prevent future versions of Chrome from being able to safely access it.” Again, it’s up to site owners to resolve this, by updating their certificates.
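If you run a site, it’s easy to check what your server actually negotiates. Here’s a minimal sketch using Python’s standard ssl module (the host is whatever you want to test; a modern default context refuses SSL 3 outright, so a server that only speaks SSL 3 fails the handshake here, just as Chrome now does):

```python
import socket
import ssl

def negotiated_tls_version(host, port=443):
    # create_default_context() disallows SSL 2 and SSL 3 by default
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. 'TLSv1.2' or 'TLSv1.3'
```

Point it at a host you control: a Poodle-vulnerable, SSL3-only server will raise ssl.SSLError instead of returning a version string.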

Airliner shootdowns ought to be technically impossible

Using a missile to shoot down an airliner ought to be made impossible.  It may be a lack of imagination on my part, but I can’t think of a circumstance where a military force needs the ability to shoot down civilian aircraft.  There aren’t a lot of manufacturers of surface-to-air missile systems, regardless of their level of sophistication and range – shoulder-launched or vehicle-mounted – so changing those designs to prevent civilian shootdowns ought not be a big deal. Admittedly there are many more means of bringing down aircraft beyond SAMs, but not a lot of them have the reach to bring down cruising airliners.

Civilian airliners have carried IFF transponders since World War II, so there’s the infrastructure in place already for the identification of non-military aircraft.  Furthermore, it’s a violation of Article 37 1.c of the Geneva Conventions to pretend you’re a civilian – that is, it’s a war crime with all the international condemnation that goes with that, so it’s reasonable to make weapons that refuse to down aircraft that identify themselves as civilian.

So, why is this still happening?

Tap and Go causes crime: duh

Ken Lay says that in the last year in Victoria, 11,500 extra crimes caused by Tap and Go cards have meant that the crime rate in Victoria has gone up (5%) rather than down.  These additional “crimes of deception” are apparently tying up police.

It’s slack. Totally slack. There’s no control over it. And what are we finding? There’s been a huge spike in different offences committed to facilitate it; cars being broken into, mail stolen, handbags grabbed, purely because of industry introducing a new practice without any regard to security.

We have taken the view we should be taking on industry over this because our concern is they’ve introduced new practices with no regard to the implications on security and there’s no prevention measures, which is at times bogging down our members in work and time that could be better spent on some really serious type of investigations or responding to critical issues.

Assistant Commissioner Stephen Fontana

And the ABA says “no way!”, pointing out that the dollar value of fraud is down since chip-in-card (neglecting that this isn’t about that), while allowing that losses following theft are up 35% (to only $20m/year).  It also ignores all the crime that would be associated with obtaining the cards.

Programmatically create Django security groups

Django authentication has security roles and CRUD permissions baked in from the get-go, but there’s a glaring omission: those roles, or Groups, are expected to be loaded by some competent administrator post-installation.  Groups are an excellent method of assigning access control to broad roles, but they don’t seem to be a first-class concept in Django.

It seems that you can kind-of bake these values in by doing an export and creating a fixture, which will automatically re-load at install time, but that’s not terribly explicit – not compared to code. And I’m not even sure if it will work.  So here’s my solution to programmatically creating Django Groups.
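For the record, the fixture route would look something like this – an initial_data.json that old-style Django loads automatically at syncdb time. This is a sketch only, using natural keys; the app label and model names are guesses to illustrate the shape:

```json
[
  {
    "model": "auth.group",
    "fields": {
      "name": "Ticket Seller",
      "permissions": [
        ["add_ticket", "myappname", "ticket"],
        ["add_creditcard_charge", "myappname", "creditcard_charge"]
      ]
    }
  }
]
```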

management.py, which is created in the same directory as your models.py and is automatically run during python manage.py syncdb:

from django.db.models import signals
from django.contrib.auth.models import Group, Permission
import models 

myappname_group_permissions = {
  "Cinema Manager": [
    "add_session",
    "delete_session",
    "change_ticket",
    "delete_ticket",         # for sales reversals
    "add_creditcard_charge", # for sales reversals
    ],
  "Ticket Seller": [
    "add_ticket",
    "add_creditcard_charge",
    ],
  "Cleaner": [ # cleaners need to record their work
    "add_cleaning",
    "change_cleaning",
    "delete_cleaning",
    ],
}

def create_user_groups(app, created_models, verbosity, **kwargs):
  if verbosity > 0:
    print "Initialising data post_syncdb"
  for group, permissions in myappname_group_permissions.items():
    role, created = Group.objects.get_or_create(name=group)
    if verbosity > 1 and created:
      print 'Creating group', group
    for perm in permissions:
      # Raises Permission.DoesNotExist if the codename isn't defined
      role.permissions.add(Permission.objects.get(codename=perm))
      if verbosity > 1:
        print 'Permitting', group, 'to', perm
    role.save()

signals.post_syncdb.connect(
  create_user_groups, 
  sender=models, # only run once the models are created
  dispatch_uid='myappname.models.create_user_groups' # This only needs to be universally unique; you could also mash the keyboard
  )

And that’s it. Naturally, if the appropriate action_model permissions don’t exist there’s going to be trouble.  The code says: After syncdb is run on the models, call create_user_groups.

ANZ: The rodeo clowns of online security

For years now I’ve been… less than impressed with the ANZ bank’s concept of how a secure banking website should work. Finally they’ve taken steps to harden their site. They’ve introduced “secret questions”, like “who was your best friend in high school”, “what’s your partner’s nickname” and “what’s your nickname for your youngest child”. At last, my money is now safe from thieves who will never guess that my partner’s nickname is Cathy, my best friend in high school was Robert, and my youngest’s nickname is Marky. Oh, darn! I accidentally disclosed the answers to those secret questions! It’s as if that information would be widely available to any thief who took the time to look me up on Facebook (don’t bother, I’m not on Facebook).

Because providing answers to these questions was supposedly making the security on my account go up, not down, I couldn’t possibly be allowed to opt out – complete with dire warnings about being liable for losses if someone found out the answers to these most basic of questions.

Most other banks have implemented two-factor authentication. Even G-mail has two-factor authentication. But not the ANZ, they’ve stepped things up a notch. They’ve eschewed two-factor, and gone for “You’ll never guess the name of my pet, which I post on Facebook all day long”.

So I took my standard defensive action: attack surface reduction and target-value minimisation. To reduce the attack surface, for each answer I mashed the keyboard – so thieves, remember my first primary school was in the suburb of pwofkmvosffslkdflsifcmmsmclsefscdsfpsdfpefsdflsd, or something. To minimise the value of the target, I swept all the funds out of the account. What’s wrong with the technique of establishing identity by the production and examination of 100 points of identifying documents?  Why do I need to have a favourite colour?
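If you want keyboard-mashed answers you can actually reproduce later, generating them is a few lines with Python’s secrets module (a sketch of the idea – store the output in a password manager, since by design it can’t be remembered or mined from Facebook):

```python
import secrets
import string

def mashed_answer(length=40):
    """Generate a random 'secret question' answer: a deliberate
    keyboard-mash that can't be looked up on social media."""
    alphabet = string.ascii_lowercase + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))
```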

Cathy worked for the ANZ until recently, and the day she received her final paypacket she shut the account. Hated their account with a passion, but the ANZ is incapable of paying their employees through anything other than an ANZ account. Because, you know, banking is hard.

Allow more JavaScript, maintain privacy

I’ve long regarded JavaScript in the browser to be one of the biggest security holes in web-browsing, and at the same time the Internet works less and less well without it. In 2008 Joel Spolsky made the observation that for some people the Internet is just broken:

Spolsky:   Does anybody really turn off JavaScript nowadays, and like successfully surf the Internets?

Atwood:   Yeah, I was going through my blog…

Spolsky:   It seems like half of all sites would be broken.

Which is not wrong.  Things have changed in the last five years, and now the Internet is even more broken if you’re not willing to do whatever random things the site you’re looking at tells you to, and whatever other random sites that site links off to tell you to, plus whatever their JavaScript in turn tells you to. This bugs me because it marginalizes the vulnerable (the visually impaired, specifically), and is also a gaping security hole.  And the performance drain!

Normally I rock JavaScript-disabling tools as part of my tin-foil-hat approach to the Internet, but I’m now seeing that the Internet is increasingly dependent on fat clients. I’ve seen blogging sites that come up empty, because they can’t lay out their content without client-side scripting and refuse to fall back gracefully.

So, I need finer granularity of control.  Part one is RequestPolicy for FireFox, similar to which (but not as fine-grained) is Cross-Domain Request Filter for Chrome.

The extensive tracking performed by Google, Facebook, Twitter et al gives me the willies. These particular organisations can be blocked by ShareMeNot, but the galling thing is that the ShareMeNot download page demands JavaScript to display a screenshot and a clickable graphical button – which could easily have been implemented as an image with an href. What the hell is wrong with kids these days?

Anyway, here’s the base configuration for my browsers these days:

| FireFox | Chrome | Reason |
|---------|--------|--------|
| HTTPSEverywhere | HTTPSEverywhere | Avoid inadvertent privacy leakage |
| Self Destructing Cookies | “Third party cookies and site data” is blocked via the browser’s Settings, with manual approval of individual third-party cookies | Avoid tracking; StackOverflow (for example) completely breaks without cookies |
| RequestPolicy | Cross-Domain Request Filter for Chrome | Browser security and performance, avoid tracking |
| NoScript | NotScripts | Browser security and performance, avoid tracking |
| AdBlock Edge | Adblock Plus | Ad blocking |
| DoNotTrackMe | DoNotTrackMe | Avoid tracking – use social media when you want, not all the time |
| Firegloves (no longer available); could replace with Blender or Blend In | – | I’ve had layout issues when using Firegloves and couldn’t turn it off site-by-site |

Australian electoral fraud

An undamaged security cable tie

If the security cable tie isn’t pulled tight engaging the teeth, it can be pulled right off. If it was secured, it would have been damaged while being removed (with scissors).

I did scrutineering at the last Victorian state election, and apart from the shocking level of informal voting and above-the-line voting, there was another shock.

Electoral fraud – or the possibility of it.

The nice thing about living in Australia is that we take our democracy seriously, and we balance being able to prove what the outcome was with ballot secrecy. Nobody, no level of government or industry, no individual, will know how you voted without you telling them. Yet at the same time we can have confidence that our electoral system is not being rorted; our governments change back and forth, and each time they do, representatives of both sides keep a close watch on the activities of the employees of the AEC and VEC, eyeballing each individual vote and knowing that they are all distinctly different from the others in spite of being a collection of handwritten marks on a slip of paper.

To minimize the risks of ballot box tampering, at the start of voting the ballot boxes (just big cardboard boxes here in Australia) are sealed shut with serialized cable ties. An independent somebody (typically one of the first voters to wander into the polling station) witnesses an Electoral Commission employee do this; their details are recorded (by details, I think that means signature, but it could be enough to actually track the person down afterwards) and they sign the form that records the sealing of those particular ballot boxes.

So how come they use cable ties that can be “done up” and yet the teeth don’t engage – thus leading to an unsealed ballot box? Is it too much to ask for a cable tie with teeth on both sides?

I should have kicked up a fuss, but it was a safe booth in a safe seat, and who needs the hassle?

Anyways, the reason I relate this story is that I’ve been seeing comments along the lines of “this is the 21st century, why the hell are we using pencil and paper?”  Because, dickwads, computers don’t leave a fucking audit trail.  There’s no scrutineering of electrons.  How the hell are you meant to verify that Clive Palmer didn’t in fact get 98% of the vote?  You can’t.  Interesting that Clive Palmer owns the company that supplied all of the (suspiciously cheap) voting machines to the AEC, but that hasn’t got anything to do with it. And the cost! Pencils are 10c each, paper is about a cent a sheet.  A shitty computer is $500, and requires a bunch of electricity. “Do it on the Internet, or use smart phones!” I hear you say. No, because while nearly everyone can move a pencil around, significantly fewer can use their computer to vote. And there’s no connection between how you voted and the counting of votes. The announced result could be anything, and there’d be absolutely no way of proving it wrong.  So, yes, computers are shiny and clearly the best way of implementing a voting system – if you want an electoral system you can’t actually trust.

Flooding with water

So, looking at properties, and a number are down on the floodplain near the local moving body of water, a river/creek.  I wonder to myself if the area is at any risk from floodwater; should I even bother looking at the area?

The council, being the government body most connected to the area, ought to know.  It doesn’t; it can’t tell me, except to say whether a specific property has a flood overlay, which means modelling has determined that it is at risk of a 1 in 100 year flood.

What is the 1 in 100 year flood event?

The 1 in 100 year flood event is the storm that happens on
average once every one hundred years (or a 1% chance of
occurring in any given year).

Now, that means in any given year there’s a 99% chance you’re not going to get flooded.  Over 100 years, that means a 0.99^100, or 36.6%, chance of not getting flooded; roughly a 2/3 chance of having water washing through your home at some point there.  Basically, that’s a guarantee that in the next century your home will be damper than normal – because the 1 in 100 year events are calculated off historic data, not forward climate models.  And the forward models say that things are only going to get more extreme; have you noticed how 1 in 100 year events seem to happen to the same place every decade or so?
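The arithmetic is quick to check:

```python
# Chance of *not* seeing a "1 in 100 year" flood over a century
p_dry_year = 0.99                   # 1% chance of flooding in any given year
p_dry_century = p_dry_year ** 100   # all 100 years flood-free

print(round(p_dry_century, 3))      # 0.366 chance of staying dry
print(round(1 - p_dry_century, 3))  # 0.634, i.e. ~2/3 chance of flooding
```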

In fact, pretty much anyone you talk to – water utilities for example – will only talk about 1 in 100 events. Vital government infrastructure (stuff that has to keep operating in the event of a flood disaster, like hospitals and my home) has to be above the 1 in 500 line. From what I’m told, they calculate this on a site-by-site basis rather than having a map (they’re not building a bunch of new hospitals, so it’s easier that way).  Sites aren’t rated as being 1 in 110 year; you’re either in the 100 year box or not rated at all.

The gist of what I was able to read into the subtext of the hints being passed in my conversation with a town planner specializing in flooding was: Floodplains get flooded, even in cities, even if there’s a wetlands further upriver that could absorb a sudden influx of water, even if the sides of the creek are quite steep and the channel is surprisingly broad, and even if there are barricades; If you don’t like that, don’t live there.

So I won’t.  It makes searching for a home so much easier, even if the homes out of the floodplain are more expensive and built on those annoyingly sloped hill things.

Actually, this reminds me of the 1972 Elizabeth St Floods my Mum told me about getting caught in. I would never have guessed a major street in our CBD could turn into a river – and then it happened again in 2010.

news.com.au polls rigged

A news.com.au poll over whether “football” or “soccer” was a better name for the world game resulted in 2006 votes for each.

IT’S OFFICIAL. Australia is completely split down the middle on the issue of whether to call the world’s most popular sport “soccer” or “football”.

A News.com.au reader poll which has attracted 4,012 votes at the latest count reveals that exactly 2006 people voted for football, and 2006 for soccer.

What they apparently didn’t realise was that the poll was rigged. A user posted to Reddit that he had hacked the system and ensured this and other polls came out equal.

I actually wrote a program where for each option someone voted, my program would vote once for every other option, thus maintaining a deadlock.

Every now and then, they reported on poll results as if they were actual news. After being emailed an alert about this, they are yet to retract any of their articles.

The whole saga was blogged here.
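The balancing trick the Reddit user describes is simple to sketch (a hypothetical reconstruction – the actual bot’s code wasn’t published): respond to each real vote with one counter-vote for every other option, so every real vote bumps all tallies by exactly one and the totals stay locked together.

```python
def counter_votes(observed, options):
    """Return the counter-votes to cast after seeing a real vote:
    one vote for every option *except* the one just voted for."""
    return [opt for opt in options if opt != observed]

tally = {"football": 0, "soccer": 0}
for real_vote in ["football", "soccer", "football", "football"]:
    tally[real_vote] += 1                       # the genuine vote
    for opt in counter_votes(real_vote, tally):
        tally[opt] += 1                         # the bot's counter-vote

print(tally)  # {'football': 4, 'soccer': 4} -- a permanent dead heat
```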

Just in case News remove the story above, here’s a screendump. — update Wednesday 8:50pm: it has now been removed.

news.com.au poll

Internet privacy: hard work, but doable

Ever since I came across browser fingerprinting, it’s been very hard to ignore that little voice in my head that tells me they’re out to get you. I routinely rock the Internet with JavaScript and Flash disabled thanks to NoScript and the similar NotScripts on Chrome, and have, in the past, been satisfied that these precautions were enough to stop the bad people on the Internet. If my browser was dumb, it couldn’t hurt me.

I routinely leave cookies enabled because they don’t present a system security threat. There are cross-site supercookies, but they’re implemented outside of the HTML cookie world — they’re done with Flash and JavaScript, so not so much of  a problem with my configuration.  In the future I’ll be disabling third-party cookies.

Disabling third party cookies doesn’t do much good against browser fingerprinting.  I hadn’t realised how unique my browsers are. So Firefox gets FireGloves, which will work even for pages where I’ve enabled JavaScript et al. FireGloves changes HTTP request headers so that instead of my system’s actual values, the most generic values found on the Internet are used instead; it can also cycle through them randomly.
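To see why generic headers help, here’s a toy sketch of header-based fingerprinting (the header values are made-up examples, and real fingerprinters also fold in fonts, plugins and so on): hash whatever the browser sends, and any unusual combination yields a stable, nearly unique identifier.

```python
import hashlib

def header_fingerprint(headers):
    """Hash a browser's request headers into a stable identifier.
    Rare header combinations make the hash nearly unique per user."""
    canonical = "\n".join(
        "%s: %s" % (k.lower(), v) for k, v in sorted(headers.items())
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

rare = header_fingerprint({
    "User-Agent": "Mozilla/5.0 (X11; Linux i686; rv:10.0) Gecko/20100101",
    "Accept-Language": "en-AU,en;q=0.8,de;q=0.3",  # an uncommon mix
})
generic = header_fingerprint({
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
    "Accept-Language": "en-US,en;q=0.5",           # very common values
})
```

Sending only the most common values, as FireGloves does, makes your hash collide with millions of other browsers – which is exactly the point.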

Because of the interminable delay in page redirection on my grossly underspec’d netbook, I’ve added Don’t track me Google (which Chrome will download but then leads you to believe it won’t let you install; if you click *->Tools->Extensions, then drag it from the download bar onto the Extensions list, it will install just fine).

Because the Australian government seems increasingly intent to read my mail, I’ve gotten quite interested in preventing them doing so. Encrypted communications provide private browsing — what goes back and forth is a secret, but not who are having the conversation. The EFF’s HTTPS Everywhere (which works on Firefox, and kinda on Chrome) enforces a preference for SSL communications where available. However, in the real-world parallel to the electronic, that ensures that instead of my ISP being able to see me walk around the streets and then into glass-walled buildings, the buildings now become opaque. They still know what buildings I’ve walked into. The government wants to know what buildings I’ve walked into because… ummm… the building which has bomb-making instructions… we can prove… ummm… something. But now we’re safe! The ineptitude of the government’s censorship plans leaves me with no desire to allow random ISP and government employees to rifle through whatever-it-is-I-do-on-the-Internet whenever they feel like it.

As such, the next step is to start using an anonymising network; initially I2P seemed to be just the ticket.  I2P is an unofficial top level domain, and under it you can find — amongst other things — eepsites, anonymously hosted web sites. Problem is, they serve HTML, and the pages could refer you off the .i2p TLD thus exposing your IP address (they might do this via a web-bug or something as innocuous as an externally hosted CSS file). I2P is primarily a darknet, not an anonymising proxy; it’s an internet that doesn’t play by the same rules, and the effect is that no-one on it can identify anyone else on it (with some demonstrated exceptions). The I2P network seems to be populated by scary people and paranoid people. By far the biggest problem is that I2P doesn’t work very well for surfing the Internet, due to its limited outbound connection (outproxy) to the wider Internet.  Given the http://i2p.to proxy allows viewing this darknet from outside, there’s not much point running I2P unless you want to anonymously publish information.

So while I2P isn’t enough on its own to hide your identity online, it isn’t really what I want anyway. I don’t want to wander the darknet, I want to be out in the light of the Internet using my Cloak of Invisibility.  This is where the only (non-VPN) game in town comes in, along with all its demonstrated weaknesses: Tor.  The Tor network is accessed via the TorButton plugin.

When using TorButton, to minimize your risk profile you can’t run random crap on your browser — you’ve got to just browse. As such, the Tor developers recommend you use TorButton with a bunch of other tools (many of which I’ve already mentioned), which are all helpfully bundled up into the Tor Browser bundle, a secured version of FireFox — not a plugin — that uses the Tor network.  They’re also very down on embedded environments like Flash, Silverlight, Quicktime, RealPlayer… you get the idea.  In addition, those datafiles that carry active content — .DOC and .PDF — scare the willies out of them, and they want you to only open them once you’re disconnected from the Tor network.

In fact, they go so far as to recommend Tails running inside a VM, which means all your traffic goes via Tor.  That seems to be the optimal solution.