Category Archives: Data

Programmatic submission of Australia Post’s CN23 customs form

A number of major international destinations will now only accept packages with an electronic CN23 customs declaration. Normally, you’d do this by rocking up to the Post Office with your pre-addressed parcel, filling in a paper CN23 form, and having it transcribed into Australia Post’s computer system by the postal worker behind the counter. You can elect to receive SMS notifications of changes of status (landed, delivered, etc.) for 50c.

Australia Post also allows you to fill in the appropriate details on their website; if you do this, you get a QR code sent to you via SMS (free) and email (free), which the postal worker scans in, and all the details (your name and address, destination name and address, contents, etc.) are attached to your package without any error-prone re-keying. The downside of going down this path is the dismal website Aussie Post provides: a JavaScript-heavy, painfully slow dog of a site that doesn’t even cache your own address.

Once the QR code is scanned and the postal worker has checked everything with you, they’ll print out the CN23, get you to sign it, and attach it to your parcel. Because the To and From addresses are on the CN23 form (and those details are in electronic form, associated with the package’s barcode), it’s perfectly acceptable to present an unaddressed package at the post office (make sure you can tell which package is which, if you go down this route).

One thing you need to be aware of: Australia Post hasn’t heard of Unicode. You absolutely can’t use any characters not in the ASCII character set, and even then only a very limited range of them. Certain fields allow some characters which in turn aren’t allowed in other fields.
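The code below runs the receiver fields through CustomsString() and CustomsAddress() helpers to enforce this. Those helpers aren’t part of Australia Post’s API and aren’t shown in the original, so here’s a minimal sketch of the idea; the allowed punctuation set and the label_address_lines key are my assumptions:

    import unicodedata

    def CustomsString(s):
        # Decompose accented characters ("ö" becomes "o" plus a combining
        # mark), then drop anything that won't survive as plain ASCII.
        ascii_only = (unicodedata.normalize("NFKD", s)
                      .encode("ascii", "ignore").decode("ascii"))
        # Keep letters, digits and a conservative (assumed) punctuation set;
        # blank out everything else.
        allowed = " -'.,/"
        return "".join(c if c.isalnum() or c in allowed else " "
                       for c in ascii_only)

    def CustomsAddress(order):
        # Hypothetical helper: sanitise each address line; the key name
        # below is a guess, not a real field from the order data.
        return [CustomsString(line) for line in order["label_address_lines"]]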

One of the fields you can supply is the HS tariff code, from the Harmonised System – an international standard set of codes for describing “stuff”. The source code below uses the code for “Toy, plastic construction” – you should use the code for what you’re actually sending. You can specify multiple HS codes. Dollar values are in decimal dollars, weights are in decimal kilograms.
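To give a feel for multiple codes, here’s what a two-item content list might look like – the second entry (and its 49019900 book code) is my illustration, not from the original:

    # Illustrative only: one content entry per distinct HS code.
    content = [
        {"content": "Toy, plastic construction", "contentQuantity": 2,
         "contentUnitValue": 30.00, "totalContentValue": 60.00,
         "contentWeight": 0.5, "hsTariff": "95030039",
         "contentCountryOfOrigin": "DK"},
        {"content": "Printed book", "contentQuantity": 1,
         "contentUnitValue": 15.00, "totalContentValue": 15.00,
         "contentWeight": 0.3, "hsTariff": "49019900",
         "contentCountryOfOrigin": "AU"},
    ]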

After calling the Australia Post website with your customs declaration, it returns a base-64 encoded PNG of the QR code to present at the counter, and a base-64 encoded PDF of the CN23 form – there’s no point printing the PDF out, because postage isn’t paid yet; let the Post Office print it with the postage on it. You’ll also get the PNG via email and SMS (free).

Here’s some Python to make this submission:


    import base64
    import json
    import requests

    # subtotal, order and orderid are assumed to come from your own
    # order-management system; they're not defined here.
    AP_session = requests.Session()
    jsonFormData = {"customDeclaration":{
      "label":{"source":"AEM","postagePaidIndicator":False,"eadIndicator":False},
      "parcelCharacteristics":{
        "productClassification":11,
        "dangerousGoodsIndicator":False,
        "returnInstructions":"Return By Most Economical Route",
        "confirmationMobileNumber":"0411111111",
        "content":[{
          "content":"HS tariff code name for your stuff",
          "contentQuantity":1,
          "contentUnitValue":subtotal,
          "totalContentValue":subtotal,
          "contentWeight":int(order["total_weight"])/1000,  # grams -> decimal kg
          "hsTariff":"95030039",
          "contentCountryOfOrigin":"DK"
          }],
        "totalConsignmentValue":subtotal},
      "senderAddress":{"firstName":"Josh","lastName":"FromGeekrant.org",
        "addressLine":["11 Example St"],"suburb":"YourSuburbName","state":"VIC",
        "postcode":"3000","email":"addr@example.com",
        "phone":"0411111111","smsConfirmation":False,"countryCode":"AU"},
      "receiverAddress":{"firstName":CustomsString(order["label_address_name_first"]),
        "lastName":CustomsString(order["label_address_name_last"]),
        "countryCode":order["label_address_two_char_country_code"],
        "addressLine":CustomsAddress(order),
        "suburb":CustomsString(order["label_address_city"]),
        "state":CustomsString(order["label_address_state"]),
        "postcode":CustomsString(order["label_address_postal_code"]),
        "email":order["buyer_email"]}
    }}

    stopact = {"jsonFormData":json.dumps(jsonFormData)}
    result = AP_session.post(url='https://auspost.com.au/bin/form/stopact',
      data=stopact, timeout=2)
    result.raise_for_status()  # bail out before parsing if the POST failed
    response = json.loads(result.text)
    # Both artifacts come back base-64 encoded: a QR code to show at the
    # counter, and the CN23 form itself.
    filename = "{}-customsQRcode.png".format(orderid)
    with open(filename, "wb") as fh:
      fh.write(base64.b64decode(response['qrCode']))
    filename = "{}-CN23.pdf".format(orderid)
    with open(filename, "wb") as fh:
      fh.write(base64.b64decode(response['label']))

YahooGroups sails into the sunset – export your data now

YahooGroups is mostly shutting down.

This official notice tries to describe what’s being removed and what isn’t, but succeeds in being a bit confusing: emails to groups will keep working, but “Conversations” and “Email updates” won’t.

My reading is you’ll still be able to send emails and they’ll be distributed – but nothing else will work.

But I’d also assume this is not a business they want to be in anymore, so even emails will probably be discontinued before too long.

There’s an option to download all your data, accessible via this link: https://groups.yahoo.com/neo/getmydata

It took a week for it to be ready.

Then it took 5 goes for the download to actually work. It repeatedly conked out part way through.

When it finally worked, the result (for me) was a 1.6 GB zip file with everything from (it appears) every group I’m in, including messages, files and everything else.

Inside the subdirectory for each group are:

  • files.zip
  • links.zip
  • medias.json – this seems to be metadata for the photos and attachments, below
  • messages.zip – looks to be mbox files with the entire archive of messages (see the sketch after this list)
  • photos_and_attachments.zip
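Since messages.zip appears to hold mbox files, Python’s standard mailbox module can skim them once extracted; a minimal sketch (the file name is a placeholder – the layout inside the zip will vary per group):

    import mailbox

    # Point this at an mbox file extracted from messages.zip.
    archive = mailbox.mbox("messages/archive.mbox")
    for message in archive:
        print(message["Date"], message["From"], message["Subject"])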

I can’t see some of the more esoteric data from YG, such as Databases, Polls and Events. Export of those might have to be a manual process.

If you’re interested in getting your data, I’d get onto it now. It’s all scheduled to be wiped in mid-December, and the process may become even slower than a week.

2020-01-16: The deadline was extended to the end of January 2020.

Netgear Stora upgrade v3: 2-disk-JBOD to 1-disk-JBOD

So, we’re butting up against the storage capacity of our Netgear Stora again (93% full). The NAS currently has 2 x 2TB drives and no more free bays to drop drives into, so whatever the next arrangement is, it has to involve getting rid of at least one of the current drives. The Stora is currently backed up to an external drive enclosure with a 4TB drive mounted in it. Other things are also backed up on that external drive, so it’s more pressed for space than the Stora.

So here’s the plan:

  • collect underpants
    This was a flippant comment, but it’s upgrade season and we recently acquired a second-hand computer with an i5-3470S CPU, the most powerful thing in the house by a significant margin. I wanted the dual DisplayPort outputs, but unfortunately it could only be upgraded to 8GB of RAM, so instead the CPU got swapped into our primary desktop (and a graphics card acquired to run dual digital displays). Dropping in a replacement CPU required replacing the thermal grease, and that meant a rag to wipe off the old grease; thus the underpants.
  • backup the Stora to the 4TB drive
  • acquire a cheap 8TB disk because this is for backing up, not primary storage
  • clone the 4TB drive onto it using Clonezilla
  • expand the cloned 4TB partition to the full 8TB of drive space
    Well, that didn’t work.  Clonezilla didn’t seem to copy the data reliably, but admittedly I was running a stupidly old version.  Several hours of mucking around with SATA connectors and Ubuntu NTFS drivers later, I gave up and copied the disk using Windows.  It took several days, even using USB3 HDD enclosures, which is why I spent so much time mucking around trying to avoid it.
  • backup the Stora to the 8TB drive
  • remove the 2 x 2TB drives from the Stora
  • insert the 4TB drive into the Stora
  • allow the Stora to format the 4TB drive
  • pull the 4TB drive
  • mount the 4TB and 2 x 2TB drives in a not-otherwise-busy machine
  • copy the data from the 2 x 2TB drives onto the 4TB drive
  • reinsert the 4TB drive into the Stora
  • profit!

And, by profit, I mean cascade the 2TB drives into desktop machines that have 90%-full 1TB drives… further rounds of disk duplication ensue. The 1TB drives then cascade to other desktop machines, and yet more rounds of disk duplication ensue.

At the end of this process, the entire fleet will have been upgraded. But the original problem of butting heads against the Stora will not have been addressed; hopefully that will be a simple matter of dropping another drive in.

The last time we did this, we paid $49.50/TB for storage.  This time around, it was $44.35/TB; a 10% drop ((49.50 − 44.35) / 49.50 ≈ 10.4%) in storage prices isn’t anything to write home about over a four-and-a-half-year window.

Trustworthy email: authentication using exim4, SPF, DKIM and DMARC

The email authentication technologies we’re about to implement are, according to the authentication authorities, all you need for your email to be regarded as genuinely from your domain – and for anyone else’s email to be regarded as not from your domain.  Effect: your emails can be considered trustworthy by email receivers who use these technologies. If they don’t use these technologies, they can’t tell.

At the very least, Google will be less likely to think your email is spam.

PTR record

A PTR record can be obtained from your host’s nameserver – it’s a reverse DNS record for your IP address. If the PTR record points at ec2-23-65-53-221.ap-southeast-2.compute.amazonaws.com rather than example.com (your domain), and you’re claiming to be sending mail from example.com, what’s the email recipient meant to think?

host 23.65.53.221

will tell you what the host for that IP is. Lodge a ticket with your hosting provider and get that PTR record changed to example.com. This might take about a day.
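If you’d rather script the check, Python’s standard library does the same reverse lookup (using the example IP from above):

    import socket

    # gethostbyaddr() returns (hostname, aliases, ip_addresses);
    # the hostname is what the PTR record points at.
    hostname, aliases, addresses = socket.gethostbyaddr("23.65.53.221")
    print(hostname)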

SPF record

Create a Sender Policy Framework record on your nameserver:

TXT @ "v=spf1 a mx -all"

This says “for my domain, I will only send email from the IP addresses listed on the nameserver” – the a and mx mechanisms point at your A and MX records, and -all hard-fails everything else.  Nameserver changes take time to propagate.

After your nameserver changes have propagated, you can go to https://dmarcian.com/spf-survey/ to check out if you got it right.
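You can also pull the published record yourself; a quick sketch using the dnspython package (my choice of tool – any DNS client will do):

    import dns.resolver  # pip install dnspython (2.x)

    # Fetch the domain's TXT records and pick out the SPF one.
    for record in dns.resolver.resolve("example.com", "TXT"):
        text = b"".join(record.strings).decode()
        if text.startswith("v=spf1"):
            print(text)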

DKIM

DomainKeys Identified Mail is where things get more involved.  We’re doing this on Debian Linux (or a derivative like Ubuntu) with exim4. We’re making our signing key 2048 bits, which is long enough to make life slightly unpleasant for us. Fortunately for you, I’ve written a bash script that outputs the TXT record we need to create on the nameserver – because some nameservers (I’m looking at you, Gandi) can’t hold “long” strings, it’s broken into “small” strings:

sudo apt install openssl
cd /etc/exim4
sudo openssl genrsa -out dkim.private 2048
sudo openssl rsa -in dkim.private -out dkim.public -pubout -outform PEM
echo $(echo $(date -u +%Y%m%d && echo '._domainkey.example.com') | sed -e 's/[ ]//g' && echo $(echo ' TXT "v=DKIM1; p="' && echo $(grep 'PUBLIC KEY' -v dkim.public) | sed -e 's/[ ]//g' | fold -w200 | sed -e 's/\(.*\)/"\1"/g'))

which gives something like
20170419._domainkey.example.com TXT "v=DKIM1; p=" "MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAvCNqU0Njd4YQ4e89T3FNc+uyOS2JwUqynGk7uwcSYHjIE2MGRuTxi56s4JgPKSnCVlBkJlUnXQHXFp2UGnLm8SADtjRMfWwpNxz6TmzXBpMnNZV1zvuoBBdcxh0Qg1TtCSACtWM6ehml0BmOHVA8Ippqj9iRlP2HMjuVMxZXewN9eJl" "c6zsyOwQPvVKpJ+Rdvr+pPkDztAVTw7mNSeyy+TL6O/3L9sl7A19Yx8jLHKuGUh9LutVuv1VP16e7GwlnA3Zqn5C1jyY5Qvr2SEHZMcE3VzD7XKZtZWbpkGh+A5S15NrOH4k9tbVfNbjft6Y1jUJRTT+4DD0ZEVlr4zO+WQIDAQAB"

That all goes into one nameserver TXT record, spaces and all.  The world will join up the " " and get one big string. Note the 20170419 at the front? That’s the selector, and it needs to get larger with each new key.  Periodically you’re going to have to reissue your key, because security.  You know what gets larger as time goes by?  The date.  Use the date.  If you screw up, use tomorrow’s date, etc.

Once you’ve got your public key out to the public via your public nameserver, you need to get exim to sign the payloads:

sudo nano conf.d/main/01_exim4-config_listmacrosdefs

After the line CONFDIR = /etc/exim4, add:

#DKIM loading
DKIM_CANON = relaxed
DKIM_DOMAIN = ${sender_address_domain}
DKIM_PRIVATE_KEY = CONFDIR/dkim.private
DKIM_SELECTOR = 20170419

and reload the mail server

sudo service exim4 restart

After an appropriate delay for nameserver propagation, use https://protodave.com/tools/dkim-key-checker/?selector=20170419&domain=example.com to check your work.
If that works out, send an email from example.com to check-auth@verifier.port25.com to ensure everything checks out:

echo -e "Test my DKIM plz\nMsg Body\n.\n\n" | mail -v check-auth@verifier.port25.com

DMARC

Domain-based Message Authentication, Reporting and Conformance is where the wheels can come off if you screwed anything up.  We’re going to set things up so that when you screw it up, computers scold you rather than putting your emails in the bin.

You will need to create two DMARC reporting accounts.  Servers will email you a (surprisingly detailed) report card on how you’re doing with your implementation. It’s best if these accounts are on the domain being authenticated, because technically a different domain has to authorise receiving your reports or they’ll be ignored (though Google will happily mail reports off-domain even if the other domain hasn’t said that’s okay).  Yours are dmarc_failures@example.com and dmarc_summary@example.com, according to the following nameserver entry:

_dmarc.example.com. 1800 IN TXT "v=DMARC1;p=none;pct=100;ruf=mailto:dmarc_failures@example.com;rua=mailto:dmarc_summary@example.com"

none is the consequence for screwing up. none is where we’ll start, and we’ll see what the reporting records say.  After a while, you’ll be comfortable that everything is ticking along nicely, and you’ll up the consequence to quarantine (shove it in spam) or reject (burn it).
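When you’re ready to escalate, the record is identical apart from the policy; something like this for quarantine (swap in reject once you’re fully confident):

_dmarc.example.com. 1800 IN TXT "v=DMARC1;p=quarantine;pct=100;ruf=mailto:dmarc_failures@example.com;rua=mailto:dmarc_summary@example.com"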

After your nameserver changes have propagated, you can go to https://dmarcian.com/dmarc-inspector/ to check out if you got it right.

As a human, to read the reports sent to you, upload the files to https://dmarcian.com/dmarc-xml/
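If you’d rather skim the aggregate reports locally, they arrive as zipped XML, and Python’s standard library can pull out the interesting fields; a minimal sketch (the file name is a placeholder in the style receivers actually use):

    import xml.etree.ElementTree as ET

    # Parse an already-unzipped DMARC aggregate report.
    tree = ET.parse("google.com!example.com!1493164800!1493251199.xml")
    for record in tree.getroot().iter("record"):
        ip = record.findtext("row/source_ip")
        count = record.findtext("row/count")
        disposition = record.findtext("row/policy_evaluated/disposition")
        dkim = record.findtext("row/policy_evaluated/dkim")
        spf = record.findtext("row/policy_evaluated/spf")
        print(ip, count, disposition, dkim, spf)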

How to install the Crypt::Eksblowfish::Bcrypt module, and Crypt::Random

Have you gotten the error
Can't locate Crypt/Eksblowfish/Bcrypt.pm in @INC (you may need to install the Crypt::Eksblowfish::Bcrypt module)
and wondered what to do? Wonder no more!

sudo apt install libcrypt-eksblowfish-perl

and perhaps

sudo apt install libdigest-bcrypt-perl

What about
Can't locate Crypt/Random.pm in @INC (you may need to install the Crypt::Random module)
Easy!

sudo apt install unzip make gcc
wget http://search.cpan.org/CPAN/authors/id/I/IL/ILYAZ/modules/Math-Pari-2.01080900.zip
unzip Math-Pari-2.01080900.zip
cd Math-Pari-2.01080900/
perl Makefile.PL
sed -i 's/CLK_TCK/CLOCKS_PER_SEC/g' pari-2.1.7/src/language/init.c
make
make test
sudo make install
cd ..
wget http://search.cpan.org/CPAN/authors/id/V/VI/VIPUL/Crypt-Random-1.25.tar.gz
tar zxvf Crypt-Random-1.25.tar.gz
cd Crypt-Random-1.25
perl Makefile.PL
make
make test
sudo make install

Easy! Only takes a few hours if you don’t know what you’re doing.

Clone to a bigger drive, and convert MBR to GPT

I wanted to move part of my Windows setup to a new drive.

Currently, Windows itself and Program Files are on the C: drive, which is an SSD (which I meant to blog about in detail, but never got around to), and documents are on the D: drive (which was the tricky bit of the SSD upgrade – doing it properly involves using SysPrep with an Unattend.xml configuration file that tells Windows that documents will live on D:, not C:). This article describes it in detail.

Anyway, that’s really irrelevant to the problem at hand, which is that the D: drive had run out of space. Here’s a brief description of what I did:

  • The new drive is a 4 TB drive, replacing a 1 TB drive.
  • Plug the new drive in, use Clonezilla to clone the old D: onto the new drive. Following the detailed instructions, this all went pretty smoothly.
  • But… the catch is the old drive was formatted as MBR, which tops out at 2 TB (MBR holds sector counts in 32 bits, and 2³² × 512-byte sectors ≈ 2.2 TB). Beyond that, you need GPT.
  • I looked around for tools to convert the drive. It’s easy if you’re prepared to wipe it, but I wanted to preserve the data I’d just moved across. Finding ways to do it without wiping everything was tricky, but I settled on the free version of MiniTool Partition Wizard – this has an easy-to-understand interface, and did the job.
  • Once the MBR is converted to GPT, you can enlarge the partition to make the whole drive available.
  • Unplug the old drive, move the new one into the same slot as the old (this is on a Mac Pro booting Windows via Boot Camp) and it works. Done!

PS. A similar exercise afterwards shuffled the OS X partition from a 320 GB drive to the old 1 TB. That required GParted, as it seems the GPT partition couldn’t be expanded due to a formatting issue (which GParted helpfully offered to fix as it started up) and another small 600 MB partition being in the way – not sure what it is, but it seems to be essential for booting OS X – GParted was able to move it to the end of the disk.

Compress PDF files

Just a quick mention of a cool online tool I found…

I was about to email off a PDF (that I hadn’t created myself) to a discussion list when I noticed it was 6 MB… which seemed a tad excessive.

Digging around, I found SmallPDF, which can shrink them down. It got it down to 1.2 MB, with no noticeable loss of detail/fidelity.

SmallPDF is free for two files per hour with no watermarks, or USD$6 a month for unlimited use, and they have a few other related PDF functions such as file conversions.

Worth a look if you need to do something like this.

Summer 2014/2015 starts

Saturday 18 October             Max 26    Partly cloudy.
Sunday 19 October     Min 17    Max 29    Afternoon cool change.
Monday 20 October     Min 12    Max 23    Mostly sunny.
Tuesday 21 October    Min 10    Max 28    Sunny.
Wednesday 22 October  Min 16    Max 31    Possible shower.
Thursday 23 October   Min 15    Max 20    Possible shower.
Friday 24 October     Min 13    Max 22    Cloudy.

Winter 2014 ends

I’ve tried to determine winter the same way I do summer; I decided back in June that winter had started. And in August, it’s over.

Tuesday   19 August Max 15 Possible light shower.
Wednesday 20 August Min 7 Max 15 Cloudy.
Thursday  21 August Min 5 Max 18 Mostly sunny.
Friday    22 August Min 6 Max 19 Mostly sunny.
Saturday  23 August Min 7 Max 19 Mostly sunny.
Sunday    24 August Min 8 Max 18 Mostly sunny.
Monday    25 August Min 6 Max 18 Mostly sunny.

A couple of days later, the forecast was extended out to:

Tuesday   26 August  Min 8 Max 19 Partly cloudy.
Wednesday 27 August  Min 9 Max 21 Mostly sunny.

Winter 2014 starts

So I’m trying to declare Winter. I’m going to try something like Summer, but with a 16 degree ceiling, which we just hit here in Melbourne.

Monday    16 June             Max 16    Rain at times, easing.
Tuesday   17 June    Min 10   Max 16    Partly cloudy.
Wednesday 18 June    Min 8    Max 16    Mostly cloudy.
Thursday  19 June    Min 8    Max 16    Partly cloudy.
Friday    20 June    Min 10   Max 15    Shower or two developing.
Saturday  21 June    Min 9    Max 15    Morning shower or two.
Sunday    22 June    Min 9    Max 16    Partly cloudy.

I also offer the observation that you know it’s Winter when it doesn’t feel cold anymore.

Note: The 15 degree ceiling was hit on Friday 4 July 2014, and the 14 degree ceiling on Wednesday 9 July.

Summer 2014 ends

Four days before the start of Winter, I’ve declared the end of (our second) Summer:

Wednesday 28 May              Max 18    Shower or two.
Thursday  29 May    Min 10    Max 19    Partly cloudy.
Friday    30 May    Min 10    Max 19    Partly cloudy.
Saturday  31 May    Min 10    Max 19    A little rain developing.
Sunday     1 June   Min 10    Max 17    A few showers.
Monday     2 June   Min 12    Max 17    Shower or two.
Tuesday    3 June   Min 10    Max 18    Morning shower or two.