Random thoughts about Sylph

March 4, 2015

Some random thoughts on Sylph: the programming language I want.

My own wishlist:

Build/configuration settings can be bad news.  I think this is mostly a problem for older, crusty compiled languages like C and C++, but there are interpreters that are just as bad (cough, PHP).  If we really can’t do without configuration settings, they should at least be standardized and located in the source code rather than something separate you have to know how to throw at each particular compiler or interpreter.

I’d like the same reasoning to apply to makefiles, too; rather than being something external, make them part of the language.  If some source only applies to certain platforms, that information can be in the source.

Java comes close in that all you have to tell the compiler is which source file to start with.  I’d like to get rid of even that, though, so that all you need to do is point the compiler or interpreter at the folder containing the source code, and let it take things from there.  I also don’t like that Java requires the source have a particular directory structure.  Ideally, the directory structure wouldn’t matter; heck, ideally you could append all of the source code files, in any order, into a single file and it would still have the same meaning.  Don’t know if that’s realistic.  I guess if a single build produces multiple applications, you’ll need to tell the interpreter which one to run – but if nothing else it could be a selection from a menu rather than a freeform guess.

I might prefer a family of languages, with certain properties in common and in particular the ability to mix them freely.  In particular a special language for database operations, probably not much like SQL – but the key idea here is that you can put the database statements right in the middle of a function written in a regular language, rather than having to build a string or mess about with parameterization functions.

I’m ambivalent about function overloading, but if I do have it, I want it to be very obvious to the programmer which function you’re calling at any given time.  In particular, if the types don’t match one of the overloads exactly, I want to have to convert them myself, not have the language guess what I meant.  Non-overloaded functions can still do automatic conversion where appropriate.
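For comparison, Python’s functools.singledispatch gets part of the way there.  Here’s a sketch (in Python, since Sylph doesn’t exist) of overload-style dispatch where a non-matching type is rejected rather than silently converted:

```python
from functools import singledispatch

# The fallback refuses to guess: if no overload matches the exact runtime
# type, the caller has to convert the argument themselves.
@singledispatch
def describe(value):
    raise TypeError(f"no overload of describe() for {type(value).__name__}; "
                    "convert the argument yourself")

@describe.register
def _(value: int):
    return f"int {value}"

@describe.register
def _(value: str):
    return f"str {value!r}"

print(describe(3))        # dispatches on the exact runtime type
print(describe("three"))
try:
    describe(3.0)         # float matches no overload: no silent conversion
except TypeError as e:
    print("rejected:", e)
```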

As for the cases where function overloading is all about converting types according to certain rules, e.g., x, y -> Point(x,y), it almost seems that what you want is for the function to describe how the compiler/interpreter should parse calls to it?  I’m not sure what that would look like, but it sounds as if it could be generalized into something very powerful.  (Perhaps too powerful.  Don’t want to get seduced by the Dark Side here.)  On the other hand, if someone can make something sensible out of this, the same mechanism might also be able to provide a much safer replacement for C macros.

On the third hand, if we stick to the original idea, it could be as simple as

void draw_point(p:Point or p:new Point(x:int, y:int), c:Color)

perhaps?

This one is implicit, I think, in some of the original post’s ideas, but it should be possible to include code in a program that runs at compile-time.  Static initializers would do it by default, but there should also be an explicit way of saying “this segment of code here?  run it now.”
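Python makes a handy illustration here, since module-level code and decorators already run at definition time; a hypothetical “run it now” marker (the run_now name is mine, not a real feature) might look like:

```python
# run_now executes the decorated function immediately, at definition time --
# the closest Python analogue to "this segment of code here? run it now."
def run_now(fn):
    fn()
    return fn

TABLE = []

@run_now
def build_table():
    # runs once, when the module is first loaded, not when called later
    for i in range(8):
        TABLE.append(i * i)

print(TABLE)  # [0, 1, 4, 9, 16, 25, 36, 49]
```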

Safety/Memory Management: I’m thinking by default you can’t get a pointer to a variable; only if it has been declared in a way compatible with pointerhood.  Perhaps you have special container types that allow you to have pointers to objects inside them; the lifetime of the object, unless explicitly deleted, is the lifetime of the container.  Pointers to a deleted object automatically go null, or raise an exception if the pointer type doesn’t allow null.  Or something.  Maybe different container types have different rules – this one is reference counted, this one is explicit-delete only.  There should perhaps be an escape hatch.
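Python’s weakref module behaves a lot like the nulling-pointer rule described above: the container owns the object, and the weak reference automatically goes null when the object is deleted.  (In CPython, reference counting makes the deletion immediate.)

```python
import weakref

class Node:
    pass

container = {"a": Node()}            # the container owns the object
ptr = weakref.ref(container["a"])    # a "pointer" that doesn't keep it alive

assert ptr() is not None             # object still owned by the container
del container["a"]                   # explicit delete from the container
assert ptr() is None                 # the pointer has automatically gone null
```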

Indentation: I’m not sure about getting rid of braces, perhaps because I’m old enough to think that sometimes I might have to type in code from a printed or handwritten page, an image, or the like, rather than always having access to the original source files and/or being able to copy and paste.  I would like the language to require that the braces and the indentation match up, though.  Perhaps that means an IDE could just not show you the braces if you don’t want to see them?

Also, there are situations where even reading the code might be awkward.  If you’re six or seven indentations deep, and then drop out of several at once, it might not be clear.  “Was that the end of three blocks or four?  Which of the blocks two pages up lines up?”

Classes: if you’re willing to require complete source code (or a source-equivalent, like bytecode) then instead of using inheritance to modify the behaviour of a type, you could have a language construct that says “hey, make me a new type with all the same code as the old type, except for these changes”.  Fragile if the upstream code changes, of course, but no more so than inheritance.  Plus, from the compiler’s view the two types are unrelated, making everything simpler.

Traits: similarly, you can eliminate dispatch complications if the compiler can generate as many copies of a function, with different types, as it decides it needs.

Exceptions: for the simplest cases, how about something like

value = some_dict[key] or default_value

… although that assumes there’s only one kind of (acceptable) failure, and the function knows what it is.  It also doesn’t deal with your example where you want to add the key if it doesn’t exist, though I suppose that might just be

value = some_dict[key] or (some_dict[key] = default_value)

so long as you’re happy about assignment being an operator.  That could also be implemented much more efficiently than a real exception; it’s just a hidden extra return value.
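Python’s dict API already offers both of these patterns as ordinary methods, which suggests the idea is workable:

```python
some_dict = {"a": 1}

# value = some_dict[key] or default_value
value = some_dict.get("b", 42)          # 42, no visible exception machinery

# value = some_dict[key] or (some_dict[key] = default_value)
value = some_dict.setdefault("b", 42)   # inserts the key and returns 42

print(value)             # 42
print(some_dict["b"])    # 42 -- the key was added
```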

Operator precedence: except for a (relatively) few common, well-understood cases, just don’t have any precedence; require the programmer to use brackets.  That does mean the compiler would have to know which operators were associative, though; I don’t want to have to say (a+b)+c or a+(b+c) unless they’re actually different.  (But if they are different, then I do want the compiler to remind me of the fact by forcing me to include the parentheses.)

Concurrency:

Typically, when using threading for I/O, you don’t really need the “threads” to run simultaneously.  Give them a different name (fibers, perhaps, à la Windows) and have them run one at a time, switching between them only when they do a wait; that makes it a lot easier to reason about thread safety.

You’d still have to separate out the I/O code from the ordinary code, though.  Also, it would be nice if fibers didn’t need their own stacks, but that means the code has to be very flat; basically any time you want to call an async function the compiler has to be able to inline it.  That might be too restrictive, though I think it’s worth a try.
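Python’s asyncio coroutines are a working example of the fiber model: everything runs one at a time, and control only switches at an explicit await.

```python
import asyncio

events = []

async def fiber(name):
    events.append(f"{name} start")
    await asyncio.sleep(0)      # the only point where another fiber can run
    events.append(f"{name} end")

async def main():
    await asyncio.gather(fiber("A"), fiber("B"))

asyncio.run(main())
print(events)  # ['A start', 'B start', 'A end', 'B end']
```

Between awaits, each fiber has the world to itself, which is exactly the property that makes thread-safety reasoning easier.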

The “Python problem” can be solved by having both threads and fibers.  Fibers would be for I/O and similar tasks, and could share memory ownership; threads would be for concurrent processing, and no shared memory except via safe functions (and escape hatches, I guess).  Still doesn’t solve all the problems.

Standard library: some of these problems will go away if the compiler has source code for the standard library (apart from the native primitives, I guess) and has good support for trimming away unused code.  You still need some kind of versioning.

Optimization: speaking of trimming away unused code, that’s another case where functions need to be able to define their syntax and/or where some of the code needs to run at compile time.  So that if you’re using a function analogous to printf, it can work out at compile time that you aren’t using the floating-point bits and throw them away.  (Well, provided the format string is a constant, but that seems like a reasonable constraint.)
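As a sketch of the printf idea (in Python rather than a compiled language, so the “compile time” step is just an ordinary function run once over the constant format string; all the names here are made up for illustration):

```python
import re

CONVERTERS = {"d": lambda v: str(int(v)),
              "s": str,
              "f": lambda v: f"{float(v):.6f}"}

def compile_format(fmt):
    """Done once, at 'compile time': keep only the converters fmt uses."""
    specs = re.findall(r"%(.)", fmt)
    parts = re.split(r"%.", fmt)
    used = [CONVERTERS[s] for s in specs]
    def render(*args):
        out = [parts[0]]
        for text, conv, arg in zip(parts[1:], used, args):
            out.append(conv(arg))
            out.append(text)
        return "".join(out)
    # in a real compiler, an unused converter would simply not be linked in
    render.uses_float = "f" in specs
    return render

hello = compile_format("x=%d y=%s")
print(hello(7, "hi"))       # x=7 y=hi
print(hello.uses_float)     # False -- the float machinery was trimmed away
```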

You could do something like that along the lines of preprocessor directives, except that it’s per-invocation:

do_stuff(x:int, fred:option, greg:option):
  if option fred
    # do fred stuff
  if option greg
    # do greg stuff

and then if you ever call do_stuff(x, fred) the compiler includes the version of do_stuff with the fred option in the executable, and if you don’t it doesn’t.

That’s all I got.

Robocopy can silently fail to copy directories with invalid UTF-16 names (or, why I always compare after copying)

December 11, 2014

Over the last few days I’ve been copying the bulk of the home directories on my primary file server over to a new volume (don’t ask) and, of course, I did a comparison afterwards to make sure the copy was successful [1].  I’m talking about 3070 home directories, comprising over seven million files structured in any number of strange and wonderful ways.

I wasn’t at all surprised to find that 3069 of those directories had copied perfectly; robocopy is pretty reliable.

I was a little surprised to find that one directory had an anomaly, but still, glitches happen.  I became puzzled, though, when I realized what the problem was: an entire subdirectory was missing.  Robocopy hadn’t reported any errors.  What’s more, when I ran robocopy over that home directory again, it reported that there was nothing to do: as far as it was concerned, source and destination were a perfect match.

Explorer didn’t show me much.  The name of the two directories in the source looked the same; the first character was shown as a box.  Another little tool of mine, though, could see the difference:

.\%d898Adobe\
.\%dabdAdobe\

The tool escapes non-ASCII characters with a percent sign followed by a hexadecimal representation, so the first wide character is 0xD898 in the first directory and 0xDABD in the second.  Otherwise the names are the same.  Only the first one was present in the destination.
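For the curious, the escaping scheme is easy to sketch in Python (with the caveat that Python strings hold code points, which coincide with UTF-16 code units only below U+10000; the escape_utf16 name is mine, not the tool’s):

```python
def escape_utf16(name):
    # each code unit outside printable ASCII is written as % plus four hex
    # digits; "%" itself is also escaped so the output is unambiguous
    out = []
    for ch in name:
        cu = ord(ch)
        if 0x20 <= cu < 0x7F and ch != "%":
            out.append(ch)
        else:
            out.append(f"%{cu:04x}")
    return "".join(out)

# Python string literals tolerate lone surrogates, so we can reproduce
# the two directory names exactly:
print(escape_utf16("\ud898Adobe"))   # %d898Adobe
print(escape_utf16("\udabdAdobe"))   # %dabdAdobe
```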

The next step, obviously, was to look up the Unicode code points 0xD898 and 0xDABD.  As it turns out, they are “high-surrogate code points”, used in UTF-16 to encode Unicode code points larger than 16 bits.  The key here is that surrogate code points are only valid in pairs: an individual surrogate code point is meaningless.
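Python demonstrates the point nicely: its strings tolerate lone surrogates, just as NTFS names do, but strict encoders reject them, while a proper surrogate pair decodes to a real code point.

```python
# A lone surrogate is representable but not valid Unicode: strict UTF-8
# encoding refuses it.
lone = "\udabd" + "Adobe"
try:
    lone.encode("utf-8")
except UnicodeEncodeError as e:
    print("invalid:", e.reason)

# A *paired* surrogate sequence, by contrast, decodes to a real code point:
# D83D DE00 in UTF-16 is U+1F600.
paired = b"\xd8\x3d\xde\x00".decode("utf-16-be")
assert paired == "\U0001F600"
```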

Of course, NTFS doesn’t care.  It doesn’t really understand Unicode, so one 16-bit character is much like another.  As far as NTFS is concerned, those are perfectly good (and distinct) names.  Robocopy, however, must for some reason be converting or normalizing the UTF-16 strings, and as a result it sees those two names as identical.  (It appears to be ignoring the second occurrence of the “same” name in a single directory; it doesn’t attempt to copy the second subdirectory at all.)

So, if you’re in the habit of creating files with invalid UTF-16 names, be warned. 🙂

[1] Using some code I wrote myself.  Microsoft don’t seem to have provided a reliable directory-level comparison tool, and I’m not aware of any existing third-party solutions.  I should open-source that tool one day.

Same-Sex Marriage and the Electric Car

October 24, 2014

This was originally posted in the commentary to this post by Ilya Somin.  It has been expanded slightly.

Not very long ago all cars had internal combustion engines. If you asked someone to define a car, they might well have said something like, “a passenger vehicle using an internal combustion engine”. The engine was part of the concept of a car.

Then someone was inconsiderate enough to design a car-like vehicle that didn’t have an internal combustion engine; it had an electric motor instead. Clearly not a car by the traditional definition, so we would expect that it would be given a new name: we might call it a “civil vehicle”, perhaps, if you’ll forgive the snarkiness. (“Wireless tram” would be a more neutral option, if you would prefer it.)

But we didn’t. We recognized pretty much immediately that, while this vehicle was different from the cars we were used to, the differences weren’t really relevant: the new vehicle was, to all intents and purposes, just a different *kind* of car. It wasn’t necessary to give it a new name, or to make new road rules for it. Our conception, our *definition* of what it meant for something to be a car changed seamlessly.

In 1950, the idea of an electric car would have sounded outlandish; I would anticipate amusing comments about the length of power cords.  Now it seems normal, and it would be surprising to find someone who refused to accept that an electric car was, in fact, a car.

That’s pretty much how SSM feels, I think, to most people that have no objections to homosexuality per se. I’m 46 years old; when I was growing up, it was widely accepted that marriage was between a man and a woman, and I accepted that without ever thinking about it. Then, around 2004, SSM became a political issue. I *still* didn’t need to think about it, or not much: once the idea was presented to me, I recognized instantly that it was just another kind of marriage, just as I recognized instantly that an electric car was just another kind of car.

(We didn’t get SSM right away, of course; the religious community was too strong at the time, and we wound up with civil unions as a sort of compromise.  SSM was legalized in New Zealand on 19 August 2013.)

At any rate, including SSM under my definition of marriage wasn’t and isn’t a “strategy”. It has quite simply always seemed obviously correct, and I struggle to understand why anyone would disagree.  This sometimes leads me to assume that opponents have hidden motives, presumably either religious or prejudiced (or perhaps just a general opposition to any sort of social change) though I should probably avoid jumping to that conclusion.  There may well be some atheist, unprejudiced opposition to SSM, and the fact that I can’t recall seeing any off the top of my head isn’t particularly significant; it probably wouldn’t be particularly visible above the general noise level. 🙂

Thoughts?

Whateverize is always a word

August 24, 2014

Stack Exchange is well-known for producing the occasional true work of art, in addition to the more pedestrian (but often useful) content that is its staple fare.  This answer by Tom Christiansen is one of them:

Yes, of course versionize is a “real word” — and no disparaging remarks about its parentage should be made in polite company.

This is because ‑ize is a productive suffix in English that’s used to produce a new verb from various nouns and adjectives. That means that any word derived by combining an existing one of those using ‑ize is automatically also a “real word”.

This remains true under all conditions:

  • The result is still a “real word” even if you cannot find that word in any dictionary howsoever complete, abstruse, current, or hip said dictionary should happen to be.
  • The result is still a “real word” even if Google despite its omniscience cannot find that word anywhere.
  • The result is still a “real word” even if nobody but nobody in the entire world has ever before used that word. […]

Read the full answer on English Language & Usage Stack Exchange.

This post is licensed under the cc-wiki license as per Stack Exchange’s content licensing terms.


Why does OpenVAS report CVE-2003-0042 when my server isn’t running Tomcat?

August 18, 2014

Update: the plugin that was producing the false positive has been removed and the vulnerability rolled into plugin OID 1.3.6.1.4.1.25623.1.0.53322.  Thanks to Micha (Michael Meyer) and the rest of the developer team for addressing this problem so promptly!

(Also posted here.)

OpenVAS reports that an Apache virtual host is vulnerable to CVE-2003-0042, which is a vulnerability in versions of Tomcat prior to 3.3.1a. The host is not running Tomcat.

The detection OID is 1.3.6.1.4.1.25623.1.0.11438.

Why is this vulnerability detected and how can I fix it? Is it a false positive?

***

Yes, it may be a false positive.

CVE-2003-0042 was a vulnerability in which a GET request containing an embedded nul character made it possible to list directories that should not be listable and to obtain the source code of JSP files.  The OpenVAS test for this vulnerability sends a request for the site’s home page, and if this does not produce a directory listing, it sends another request containing an embedded nul.  If the malformed request returns a directory listing when the original request did not, the site is assumed to be vulnerable.

However, modern versions of Apache respond to the nul character in the malformed GET request by discarding the remainder of the line. This includes the information about which virtual host the request is for, so the request is parsed in the context of the default virtual host.

As a result, if the home page of the default virtual host generates a directory listing but the home page of the virtual host being scanned does not, a false positive for CVE-2003-0042 is generated.

This false positive can be prevented by placing an index.html file on the home page of the default virtual host, so that a directory listing is not generated.

Is a process running as SYSTEM running in kernel mode?

August 5, 2014

[This answer rescued from this closed Stack Overflow question.]

No. The system process is a special case, but all other processes are run in user mode, even if they are running in SYSTEM context.

Each user-mode process has its own address space. The kernel has a separate address space, accessible only to kernel-mode code. Most threads in a user-mode process run in both user mode (when running code from the process) and kernel mode (when running code from the kernel).

A thread may enter kernel mode as the result of a call to a Windows API function, or because of an external event: when a device driver needs to process an interrupt or DPC, the code runs in the context of whichever thread happens to be active at the time. This avoids the overhead of a context switch, but means that such code has to be able to run in an arbitrary context.

(Kernel-mode code can bypass the security model, but has to be careful not to leak this access out to the user-mode process that it is running in. For example, if kernel-mode code running in the context of an arbitrary thread opens a handle, it has to mark it as a kernel-only handle; otherwise, the user mode process could gain access to it.)

The System process is a special case; its threads run only in kernel mode. This allows device drivers and the kernel to do background processing that is not directly in response to an external event. It is also possible for a device driver to create a kernel-mode thread in a user-mode process.

Although they are still running in user mode, processes running as SYSTEM are given privileges that are not (in the default configuration) given to processes running in an administrative context.  For example, they have SeTcbPrivilege (“act as part of the operating system”) which allows them to do things like using SetTokenInformation to change the Remote Desktop session associated with a security token.

Adobe Creative Suite 6 Installs Very Slowly

March 1, 2013

[Additional 2014-01-10: the same issue occurs with Adobe Creative Cloud and the CCP.  In this case the outbound attempts are on both port 80 and port 443, so you’ll need two rules.]

Here’s the scenario: when using AAMEE (the “Adobe Application Manager Enterprise Edition”) to install Adobe Creative Suite 6 on a machine [1] on an isolated LAN (i.e., a machine with no direct connection to the internet) it takes a very long time to install.  A very, very long time.  During most of this time CPU and disk activity are minimal.

It turns out that the installer is trying to contact various web sites on the internet, including crl.verisign.net, presumably in order to see whether any of the digital certificates on files in the install media have been revoked.  When the attempt times out, the installer continues, but it makes many such attempts.

The workaround I recommend is to install an outbound Windows Firewall rule blocking web traffic.  Windows then instantly fails any attempt to contact a web site.  You can do this from an administrative command line like this (split for readability):

netsh advfirewall firewall add rule name="block www" dir=out 
    action=block protocol=tcp remoteport=80

To remove the rule later, this is the command:

netsh advfirewall firewall delete rule name="block www"

So how much time does this save?  Well, on one of our new machines, the installer takes 30 minutes if the firewall rule is in place.  Without it, it takes five hours.

I’ll be raising this with Adobe and perhaps, if we’re lucky, the next version won’t be quite so vigorous in its efforts to check the digital signatures.  In the meantime, you may find this approach useful.

[1] The same problem may exist when using the regular installer; I haven’t checked.

Forcing Windows to identify a special-purpose network

February 3, 2013

One problem that comes up moderately frequently when dealing with Windows servers is that the Network Location Awareness service (NLA) doesn’t allow you to assign a particular adapter to a particular network.  On ordinary networks NLA seems to do a reasonable job, but typically it can’t cope with special-purpose networks such as SANs or point-to-point links; these are all lumped together as the “Unidentified Network” which by default is contained in the Public network profile.

Why is this a problem?  Because the Windows Firewall is configured on a per-profile basis.  That’s fine if Windows Firewall doesn’t interfere with whatever you’re doing on the special-purpose network.  Unfortunately, sometimes it does.

If you search the internet, you’ll find a number of scripts which change which network profile the “Unidentified Network” is put into.  You can also do this with group policy.  This means you can make unidentified networks Private, and turn Windows Firewall off (or set relaxed rules) for Private networks.  Is this a solution?  Not really, because it doesn’t just affect your special-purpose network, it affects every unidentified network from now on.  So if, for any reason, NLA ever fails to identify the network associated with your primary internet connection, your firewall will go down.  I’m not happy about that.

I’ve found a possible workaround.  IMPORTANT: I haven’t tested this thoroughly, and at the moment I’m not planning to use it on my production server (or at least only if every other option fails).  I’ve only tried it on Windows 2012 although I suspect it will work on older versions as well.  It is entirely possible that it only works for certain ethernet drivers.  Try this only at your own risk and please take proper and adequate precautions.

In addition to adapter settings such as the DNS prefix, NLA uses the default gateway’s MAC address to uniquely identify the network.  Special-purpose networks don’t generally have a default gateway; if yours does, you probably don’t have this problem in the first place!  The idea is to create a fictitious default gateway, with suitable parameters, and the trick is that you have to give the fictitious gateway the same ethernet address as the local network adapter on the special-purpose network in question.  If you give it a make-believe ethernet address, it won’t work; you could instead give it the address of another machine on the same network, but then it won’t work if that machine is ever off-line.

(If your special network is a point-to-point link, you might instead prefer to specify the actual IP address at the other end of the link as the default gateway, if you don’t mind that NLA will see it as a new network if the ethernet address ever changes.)

So, for example: when I use get-netadapter in PowerShell I see the following results:

PS C:\Users\Administrator> get-netadapter

Name                      InterfaceDescription                    ifIndex Status       MacAddress             LinkSpeed
----                      --------------------                    ------- ------       ----------             ---------
Ethernet 2                Broadcom BCM5709C NetXtreme II Gi...#42      13 Disconnected 00-10-18-EC-7F-84          0 bps
SAN 2                     Broadcom BCM5716C NetXtreme II Gi...#41      15 Up           08-9E-01-39-53-AB         1 Gbps
SAN 1                     Broadcom BCM5709C NetXtreme II Gi...#43      12 Up           00-10-18-EC-7F-86         1 Gbps
UoW                       Broadcom BCM5716C NetXtreme II Gi...#40      14 Up           08-9E-01-39-53-AA         1 Gbps

In order to assign a specific network to the third adapter (interface index 12) I would use the following commands:

new-netroute 0.0.0.0/0 -interfaceindex 12 -nexthop 192.0.2.1 -publish no -routemetric 9999
new-netneighbor 192.0.2.1 -interfaceindex 12 -linklayeraddress 001018ec7f86 -state permanent

The first command assigns the default gateway for the interface.  I chose IP address 192.0.2.1 because it is reserved and will never be used by a real device; I suggest you do the same.  We don’t want this route to be published, and we set the metric to 9999 so that it won’t ever be used.  (The system uses the default gateway with the lowest metric.)

The second command assigns the fictitious IP address an ethernet address; as discussed before, we use the same ethernet address as the adapter we’re assigning it to.  Note that the ethernet address must be entered in a different format to that in which it is displayed; just remove the hyphens.  We make the mapping permanent.  (You can use the remove-netneighbor command if you want to remove it later.)

I hope this is helpful.  If you do try it, please let me know how it goes.

Fictitious Charges Don’t Cause Torque: Mansuripur’s Paradox

February 3, 2013

There’s been some talk lately about Mansuripur’s Paradox, e.g., see Slashdot.

For those not interested in the fine detail, there’s a very simple explanation as to why there isn’t any real paradox involved.  I’m not sure whether the debate is significant for electrical engineers; it may well be true, as Mansuripur suggests, that the Einstein-Laub equations are more appropriate than the Lorentz law for the purposes of electrical engineering.  (I have no opinion on that question.)  What should be pointed out, though, is that from a fundamental physics point of view there’s really nothing at all to see here.  (I believe that Mansuripur understands this [1], but I’m not at all sure that the journalists do!)

Let’s start with a quote from one of the articles (it looks like the paper is a bit more subtle, but the upshot might be [2] the same): “Now imagine how things look from a “moving frame of reference” in which the charge and magnet both glide by at a steady speed. Thanks to the weird effects of relativity, the magnet appears to have more positive charge on one side and more negative charge on the other.”

Now, it’s true that there’s an electric field, and for some purposes it may be convenient to imagine that this is due to charges on either side of the magnet. But these charges are fictitious. They aren’t really there, as can be easily shown by observing that charge is a scalar, and hence the charge distribution in the magnet cannot be dependent on the frame of reference. Since they aren’t there, it’s hardly surprising that the external electric field doesn’t apply a force to them.

So, basically, a fiction that happens to be convenient in electrical engineering is incompatible with relativity; or, if you prefer, in order to make fictitious charges compatible with relativity you also have to either have fictitious angular momentum, or modify the Lorentz force law.  As far as fundamental physics is concerned, this is not a paradox.

Update:

[1] I may be wrong about this; see comments to my question on Stack Exchange.

[2] The comments and linked question also suggest that I might have misunderstood the source of the supposed torque in the original paper.  There’s still nothing indicating any evidence of a real paradox.  I’ll update again if I learn anything new.

Is POLi safe?

December 28, 2012

Short answer: No.

Long answer: Hell, no.

BNZ (link here) and ASB (link here) have both recently reported that POLi have been spoofing their respective internet banking sites in order to process payments, meaning that banking passwords, any other applicable authentication information, and private banking information have been passing through POLi’s servers when POLi is used.

The banks have warned customers not to use POLi, although BNZ seems to be sending some mixed messages.

Looking at POLi’s terms and conditions there are some major warning signs.  The disclaimer of liability is probably unavoidable (though still not acceptable IMO; see below) but terms like “You will not monitor or alter the execution of POLi™ using tools external to POLi™” are neither.  They want us to trust that their software is safe to use, but they don’t want anyone to check on what it’s actually doing?  Yeah, right.

The POLi client is basically, from what I can gather, a special-purpose web browser.  While that limits exposure to security bugs, it doesn’t eliminate it, so it is also worrying that I can’t find any security bug reports either on major third party sites such as Secunia or on POLi’s own web site.  There should be at least the occasional report that “someone found a bug and we’ve fixed it” and the absence of these suggests that it really hasn’t had enough attention from the white hats.  The alternative is that POLi have figured out a way to write software without bugs; that’s basically the Holy Grail of modern computing, and if they had the secret of perfect software they’d all be fabulously wealthy and retired on private Hawaiian islands by now!

The real killer, though, is POLi’s own response to these claims (PDF).  Most importantly, the part where they deny that “POLi is spoofing/mirroring the ASB website” and claim that, instead, “POLi is providing a pass through service whereby the bank sites are accessed via our secure servers.”

Uh, hello?  Those two sentences mean the exact same thing.

POLi say they aren’t capturing customer’s authentication or other private information.  Well, good for them.  But they could.  Their software allows them to do it.  (It pretty much has to; otherwise there would be no way for the merchant to know they had been paid.)  That means it also allows anyone who manages to hack into their servers to do it.  This article on ZDNet lists some of the companies whose secure systems were breached this year: Symantec, Amazon.com-owned Zappos, Stratfor, Global Payments, LinkedIn, Yahoo – even the Chinese Government, for heaven’s sake.  Are POLi really so arrogant in the light of all this that they think their security is impenetrable?

Well, of course, they probably don’t think that.  They just want us to.

They also offer to let the banks audit the software.  Kind of pointless, really; since the software allows POLi’s servers to spoof the banking sites (oh, sorry, “provide a pass through service”) it has failed any credible audit in advance.  Any audit of the servers themselves would be good only on the day it was performed, at best.

I’m also amused by POLi’s claim on their web site (link here) that “Your confidential information is not disclosed to any third party, including us!” which I can only assume is based on a creative definition of the word “us” which excludes their servers.  True enough, the information probably doesn’t leave their secure servers and is probably deleted as soon as the transaction is complete, but that doesn’t mean that it isn’t being “disclosed” to “us” – not by any reasonable definition of those two words, at any rate.

They also say that “POLi checks the bank website’s SSL certificate and thumbprints to always ensure you are talking directly to your bank.”  So which is it, exactly?  Directly to your bank like the FAQ says or via a pass-through service like the announcement says?  These are mutually exclusive possibilities, so it has to be one or the other, and either way I’m not exactly filled with confidence.

Besides, in practical terms it doesn’t matter how good POLi’s security is.  Yours isn’t [1] because today’s consumer operating systems are still based on old designs which did not have security in mind.  If your computer becomes infected a hacker could easily modify the POLi client to behave maliciously.

Of course, said hacker could also modify your web browser to behave maliciously.  The difference is that if that happens, BNZ, at least, will cover your losses.  It isn’t clear that they will if POLi is involved, and POLi definitely won’t.

Until and unless your bank makes a public statement that they will cover POLi-related losses, don’t use it.  Just don’t.  Uninstall the client if you have it installed.  Ask your merchant to provide an alternative, or, if applicable, choose a different merchant. For example, both Ascent and Mighty Ape NZ [2] accept internet banking payments without needing any special client software, although granted you then have to wait for the payment to go through before they will ship the goods.

A small price to pay, I think.

Harry.

[1] To minimize your risks, make sure you use a standard user account (not an admin account) for your everyday activities, and use a different standard user account for your internet banking (and nothing else).  Better still, get a live DVD (a DVD which you can boot to, containing a simple operating system) and use that for internet banking.  This doesn’t change anything I’ve written here.  Both of these approaches are much better than nothing, but neither is foolproof.

[2] I have no association with either company except as a satisfied customer.