Edit 18 January: it would seem this isn’t true.
Robocopy can silently fail to copy directories with invalid UTF-16 names (or, why I always compare after copying)
December 11, 2014
Over the last few days I’ve been copying the bulk of the home directories on my primary file server over to a new volume (don’t ask) and, of course, I did a comparison afterwards to make sure the copy was successful. I’m talking about 3070 home directories, comprising over seven million files structured in any number of strange and wonderful ways.
I wasn’t at all surprised to find that 3069 of those directories had copied perfectly; robocopy is pretty reliable.
I was a little surprised to find that one directory had an anomaly, but still, glitches happen. I became puzzled, though, when I realized what the problem was: an entire subdirectory was missing. Robocopy hadn’t reported any errors. What’s more, when I ran robocopy over that home directory again, it reported that there was nothing to do: as far as it was concerned, source and destination were a perfect match.
Explorer didn’t show me much. The names of the two directories in the source looked the same; the first character of each was shown as a box. Another little tool of mine, though, could see the difference:
The tool escapes non-ASCII characters with a percent sign followed by a hexadecimal representation, so the first wide character is 0xD898 in the first directory and 0xDADB in the second. Otherwise the names are the same. Only the first one was present in the destination.
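In case you’re wondering, the escaping works roughly like this (a minimal sketch in Python, not the tool’s actual code):

```python
def escape_name(name: str) -> str:
    # ASCII characters pass through unchanged; any other UTF-16 code
    # unit is written as a percent sign plus its hexadecimal value.
    return "".join(
        ch if ord(ch) < 0x80 else f"%{ord(ch):04X}" for ch in name
    )

print(escape_name("\ud898same-name"))  # %D898same-name
```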
The next step, obviously, was to look up the Unicode code points 0xD898 and 0xDADB. As it turns out, they are “high-surrogate code points”, used in UTF-16 to encode Unicode code points above U+FFFF, which don’t fit in a single 16-bit unit. The key here is that surrogate code points are only valid in pairs: an individual surrogate code point is meaningless.
Of course, NTFS doesn’t care. It doesn’t really understand Unicode, so one 16-bit character is much like another. As far as NTFS is concerned, those are perfectly good (and distinct) names. Robocopy, however, must for some reason be converting or normalizing the UTF-16 strings, and as a result it sees those two names as identical. (It appears to be ignoring the second occurrence of the “same” name in a single directory; it doesn’t attempt to copy the second subdirectory at all.)
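Both halves of this are easy to demonstrate (a Python sketch; the “replace” conversion is my guess at the kind of normalization involved, not knowledge of robocopy’s internals):

```python
# 0xD898 and 0xDADB are high surrogates: valid UTF-16 only as the first
# half of a surrogate pair. A strict encoder rejects them on their own.
name_a = "\ud898rest"
name_b = "\udadbrest"

try:
    name_a.encode("utf-16-le")
    strict_ok = True
except UnicodeEncodeError:
    strict_ok = False
print(strict_ok)  # False: a lone high surrogate is not valid UTF-16

# NTFS compares names as raw 16-bit units, so these are distinct...
print(name_a == name_b)  # False

# ...but any conversion that substitutes invalid units (here, "replace")
# maps both names to the same string, and the two directories collide.
fixed_a = name_a.encode("utf-8", "replace").decode()
fixed_b = name_b.encode("utf-8", "replace").decode()
print(fixed_a == fixed_b)  # True
```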
So, if you’re in the habit of creating files with invalid UTF-16 names, be warned. :-)
 Using some code I wrote myself. Microsoft don’t seem to have provided a reliable directory-level comparison tool, and I’m not aware of any existing third-party solutions. I should open-source that tool one day.
Not very long ago all cars had internal combustion engines. If you asked someone to define a car, they might well have said something like, “a passenger vehicle using an internal combustion engine”. The engine was part of the concept of a car.
Then someone was inconsiderate enough to design a car-like vehicle that didn’t have an internal combustion engine; it had an electric motor instead. Clearly not a car by the traditional definition, so we would expect that it would be given a new name: we might call it a “civil vehicle”, perhaps, if you’ll forgive the snarkiness. (“Wireless tram” would be a more neutral option, if you would prefer it.)
But we didn’t. We recognized pretty much immediately that, while this vehicle was different from the cars we were used to, the differences weren’t really relevant: the new vehicle was, to all intents and purposes, just a different *kind* of car. It wasn’t necessary to give it a new name, or to make new road rules for it. Our conception, our *definition* of what it meant for something to be a car changed seamlessly.
In 1950, the idea of an electric car would have sounded outlandish; I can imagine amusing comments about the length of power cords. Now it seems normal, and it would be surprising to find someone who refused to accept that an electric car was, in fact, a car.
That’s pretty much how SSM feels, I think, to most people that have no objections to homosexuality per se. I’m 46 years old; when I was growing up, it was widely accepted that marriage was between a man and a woman, and I accepted that without ever thinking about it. Then, around 2004, SSM became a political issue. I *still* didn’t need to think about it, or not much: once the idea was presented to me, I recognized instantly that it was just another kind of marriage, just as I recognized instantly that an electric car was just another kind of car.
(We didn’t get SSM right away, of course; the religious community was too strong at the time, and we wound up with civil unions as a sort of compromise. SSM was legalized in New Zealand on 19 August 2013.)
At any rate, including SSM under my definition of marriage wasn’t and isn’t a “strategy”. It has quite simply always seemed obviously correct, and I struggle to understand why anyone would disagree. This sometimes leads me to assume that opponents have hidden motives, presumably either religious or prejudiced (or perhaps just a general opposition to any sort of social change) though I should probably avoid jumping to that conclusion. There may well be some atheist, unprejudiced opposition to SSM, and the fact that I can’t recall seeing any off the top of my head isn’t particularly significant; it probably wouldn’t be particularly visible above the general noise level. :-)
Stack Exchange is well-known for producing the occasional true work of art, in addition to the more pedestrian (but often useful) content that is its staple fare. This answer by Tom Christiansen is one of them:
Yes, of course versionize is a “real word” — and no disparaging remarks about its parentage should be made in polite company.
This is because ‑ize is a productive suffix in English that’s used to produce a new verb from various nouns and adjectives. That means that any word derived by combining an existing one of those using ‑ize is automatically also a “real word”.
This remains true under all conditions:
- The result is still a “real word” even if you cannot find that word in any dictionary howsoever complete, abstruse, current, or hip said dictionary should happen to be.
- The result is still a “real word” even if Google despite its omniscience cannot find that word anywhere.
- The result is still a “real word” even if nobody but nobody in the entire world has ever before used that word. […]
This post is licensed under the cc-wiki license as per Stack Exchange’s content licensing terms.
Update: the plugin that was producing the false positive has been removed and the vulnerability rolled into plugin OID 1.3.6.1.4.1.25623.1.0.168322. Thanks to Micha (Michael Meyer) and the rest of the developer team for addressing this problem so promptly!
(Also posted here.)
OpenVAS reports that an Apache virtual host is vulnerable to CVE-2003-0042, which is a vulnerability in versions of Tomcat prior to 3.3.1a. The host is not running Tomcat.
The detection OID is 1.3.6.1.4.1.25623.1.0.19938.
Why is this vulnerability detected and how can I fix it? Is it a false positive?
Yes, it may be a false positive.
CVE-2003-0042 was a vulnerability triggered by a GET request containing an embedded nul character; it made it possible to list directories that should not have been listable and to obtain the source code of JSP files. The OpenVAS test for this vulnerability sends a request for the site’s home page, and if this does not produce a directory listing, it sends another request containing an embedded nul. If the malformed request returns a directory listing when the original request did not, the site is assumed to be vulnerable.
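As a rough sketch of what the two probes look like on the wire (the exact request the plugin sends is an assumption on my part; only the embedded nul matters):

```python
# Two probe requests, per the description above: a normal GET for the
# home page, then the same request with a nul byte embedded in the path.
host = "www.example.org"
normal = f"GET / HTTP/1.1\r\nHost: {host}\r\n\r\n".encode("ascii")
malformed = normal.replace(b"GET / ", b"GET /\x00 ", 1)

print(b"\x00" in normal)     # False
print(b"\x00" in malformed)  # True
# Per the explanation below, Apache discards the rest of the request
# line after the nul, so the malformed request ends up being parsed in
# the context of the default virtual host.
```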
However, modern versions of Apache respond to the nul character in the malformed GET request by discarding the remainder of the line. This includes the information about which virtual host the request is for, so the request is parsed in the context of the default virtual host.
As a result, if the home page of the default virtual host generates a directory listing but the home page of the virtual host being scanned does not, a false positive for CVE-2003-0042 is generated.
This false positive can be prevented by placing an index.html file on the home page of the default virtual host, so that a directory listing is not generated.
[This answer rescued from this closed Stack Overflow question.]
No. The System process is a special case, but all other processes run in user mode, even when they are running in the SYSTEM context.
Each user-mode process has its own address space. The kernel has a separate address space, accessible only to kernel-mode code. Most threads in a user-mode process run in both user mode (when running code from the process) and kernel mode (when running code from the kernel).
A thread may enter kernel mode as the result of a call to a Windows API function, or because of an external event: when a device driver needs to process an interrupt or DPC, the code runs in the context of whichever thread happens to be active at the time. This avoids the overhead of a context switch, but means that such code has to be able to run in an arbitrary context.
(Kernel-mode code can bypass the security model, but has to be careful not to leak this access out to the user-mode process that it is running in. For example, if kernel-mode code running in the context of an arbitrary thread opens a handle, it has to mark it as a kernel-only handle; otherwise, the user mode process could gain access to it.)
The System process is a special case; its threads run only in kernel mode. This allows device drivers and the kernel to do background processing that is not directly in response to an external event. It is also possible for a device driver to create a kernel-mode thread in a user-mode process.
Although they are still running in user mode, processes running as SYSTEM are given privileges that are not (in the default configuration) given to processes running in an administrative context. For example, they have SeTcbPrivilege (“act as part of the operating system”), which allows them to do things like using SetTokenInformation to change the Remote Desktop session associated with a security token.
[Additional 2014-01-10: the same issue occurs with Adobe Creative Cloud and the CCP. In this case the outbound attempts are on both port 80 and port 443, so you’ll need two rules.]
Here’s the scenario: when using AAMEE (the “Adobe Application Manager Enterprise Edition”) to install Adobe Creative Suite 6 on a machine on an isolated LAN (i.e., a machine with no direct connection to the internet), it takes a very long time to install. A very, very long time. During most of this time CPU and disk activity are minimal.
It turns out that the installer is trying to contact various web sites on the internet, including crl.verisign.net, presumably in order to see whether any of the digital certificates on files in the install media have been revoked. When the attempt times out, the installer continues, but it makes many such attempts.
The workaround I recommend is to install an outbound Windows Firewall rule blocking web traffic. Windows then instantly fails any attempt to contact a web site. You can do this from an administrative command line like this:
netsh advfirewall firewall add rule name="block www" dir=out action=block protocol=tcp remoteport=80
To remove the rule later, this is the command:
netsh advfirewall firewall delete rule name="block www"
So how much time does this save? Well, on one of our new machines, the installer takes 30 minutes if the firewall rule is in place. Without it, it takes five hours.
I’ll be raising this with Adobe and perhaps, if we’re lucky, the next version won’t be quite so vigorous in its efforts to check the digital signatures. In the meantime, you may find this approach useful.
 The same problem may exist when using the regular installer; I haven’t checked.
One problem that comes up moderately frequently when dealing with Windows servers is that the Network Location Awareness service (NLA) doesn’t allow you to assign a particular adapter to a particular network. On ordinary networks NLA seems to do a reasonable job, but typically it can’t cope with special-purpose networks such as SANs or point-to-point links; these are all lumped together as the “Unidentified Network” which by default is contained in the Public network profile.
Why is this a problem? Because the Windows Firewall is configured on a per-profile basis. That’s fine if Windows Firewall doesn’t interfere with whatever you’re doing on the special-purpose network. Unfortunately, sometimes it does.
If you search the internet, you’ll find a number of scripts which change which network profile the “Unidentified Network” is put into. You can also do this with group policy. This means you can make unidentified networks Private, and turn Windows Firewall off (or set relaxed rules) for Private networks. Is this a solution? Not really, because it doesn’t just affect your special-purpose network, it affects every unidentified network from now on. So if, for any reason, NLA ever fails to identify the network associated with your primary internet connection, your firewall will go down. I’m not happy about that.
I’ve found a possible workaround. IMPORTANT: I haven’t tested this thoroughly, and at the moment I’m not planning to use it on my production server (or at least only if every other option fails). I’ve only tried it on Windows Server 2012, although I suspect it will work on older versions as well. It is entirely possible that it only works for certain ethernet drivers. Try this only at your own risk and please take proper and adequate precautions.
In addition to adapter settings such as the DNS prefix, NLA uses the default gateway’s MAC address to uniquely identify the network. Special-purpose networks don’t generally have a default gateway; if yours does, you probably don’t have this problem in the first place! The idea is to create a fictitious default gateway, with suitable parameters, and the trick is that you have to give the fictitious gateway the same ethernet address as the local network adapter on the special-purpose network in question. If you give it a make-believe ethernet address, it won’t work; you could instead give it the address of another machine on the same network, but then it won’t work if that machine is ever off-line.
(If your special network is a point-to-point link, you might instead prefer to specify the actual IP address at the other end of the link as the default gateway, if you don’t mind that NLA will see it as a new network if the ethernet address ever changes.)
So, for example: when I use get-netadapter in PowerShell I see the following results:
PS C:\Users\Administrator> get-netadapter

Name        InterfaceDescription                     ifIndex Status       MacAddress          LinkSpeed
----        --------------------                     ------- ------       ----------          ---------
Ethernet 2  Broadcom BCM5709C NetXtreme II Gi...#42       13 Disconnected 00-10-18-EC-7F-84   0 bps
SAN 2       Broadcom BCM5716C NetXtreme II Gi...#41       15 Up           08-9E-01-39-53-AB   1 Gbps
SAN 1       Broadcom BCM5709C NetXtreme II Gi...#43       12 Up           00-10-18-EC-7F-86   1 Gbps
UoW         Broadcom BCM5716C NetXtreme II Gi...#40       14 Up           08-9E-01-39-53-AA   1 Gbps
In order to assign a specific network to the third adapter (interface index 12) I would use the following commands:
new-netroute 0.0.0.0/0 -interfaceindex 12 -nexthop 192.0.2.1 -publish no -routemetric 9999
new-netneighbor 192.0.2.1 -interfaceindex 12 -linklayeraddress 001018ec7f86 -state permanent
The first command assigns the default gateway for the interface. I chose IP address 192.0.2.1 because it is in a range reserved for documentation (TEST-NET-1, RFC 5737) and will never be used by a real device; I suggest you do the same. We don’t want this route to be published, and we set the metric to 9999 so that it won’t ever be used. (The system uses the default gateway with the lowest metric.)
The second command assigns the fictitious IP address an ethernet address; as discussed before, we use the same ethernet address as the adapter we’re assigning it to. Note that the ethernet address must be entered in a different format to that in which it is displayed; just remove the hyphens. We make the mapping permanent. (You can use the remove-netneighbor command if you want to remove it later.)
I hope this is helpful. If you do try it, please let me know how it goes.
There’s been some talk lately about Mansuripur’s Paradox, e.g., see Slashdot.
Short answer: No.
Long answer: Hell, no.
BNZ (link here) and ASB (link here) have both recently reported that POLi have been spoofing their respective internet banking sites in order to process payments, meaning that banking passwords, any other applicable authentication information, and private banking information have been passing through POLi’s servers when POLi is used.
The banks have warned customers not to use POLi, although BNZ seems to be sending some mixed messages.
Looking at POLi’s terms and conditions there are some major warning signs. The disclaimer of liability is probably unavoidable (though still not acceptable IMO; see below) but terms like “You will not monitor or alter the execution of POLi™ using tools external to POLi™” are neither. They want us to trust that their software is safe to use, but they don’t want anyone to check on what it’s actually doing? Yeah, right.
The POLi client is basically, from what I can gather, a special-purpose web browser. While that limits exposure to security bugs, it doesn’t eliminate it, so it is also worrying that I can’t find any security bug reports either on major third party sites such as Secunia or on POLi’s own web site. There should be at least the occasional report that “someone found a bug and we’ve fixed it” and the absence of these suggests that it really hasn’t had enough attention from the white hats. The alternative is that POLi have figured out a way to write software without bugs; that’s basically the Holy Grail of modern computing, and if they had the secret of perfect software they’d all be fabulously wealthy and retired on private Hawaiian islands by now!
The real killer, though, is POLi’s own response to these claims (PDF). Most importantly, the part where they deny that “POLi is spoofing/mirroring the ASB website” and claim that, instead, “POLi is providing a pass through service whereby the bank sites are accessed via our secure servers.”
Uh, hello? Those two sentences mean the exact same thing.
POLi say they aren’t capturing customers’ authentication or other private information. Well, good for them. But they could. Their software allows them to do it. (It pretty much has to; otherwise there would be no way for the merchant to know they had been paid.) That means it also allows anyone who manages to hack into their servers to do it. This article on ZDNet lists some of the companies whose secure systems were breached this year: Symantec, Amazon.com-owned Zappos, Stratfor, Global Payments, LinkedIn, Yahoo – even the Chinese Government, for heaven’s sake. Are POLi really so arrogant in the light of all this that they think their security is impenetrable?
Well, of course, they probably don’t think that. They just want us to.
They also offer to let the banks audit the software. Kind of pointless, really; since the software allows POLi’s servers to spoof the banking sites (oh, sorry, “provide a pass through service”) it has failed any credible audit in advance. Any audit of the servers themselves would be good only on the day it was performed, at best.
I’m also amused by POLi’s claim on their web site (link here) that “Your confidential information is not disclosed to any third party, including us!” which I can only assume is based on a creative definition of the word “us” which excludes their servers. True enough, the information probably doesn’t leave their secure servers and is probably deleted as soon as the transaction is complete, but that doesn’t mean that it isn’t being “disclosed” to “us” – not by any reasonable definition of those two words, at any rate.
They also say that “POLi checks the bank website’s SSL certificate and thumbprints to always ensure you are talking directly to your bank.” So which is it, exactly? Directly to your bank like the FAQ says or via a pass-through service like the announcement says? These are mutually exclusive possibilities, so it has to be one or the other, and either way I’m not exactly filled with confidence.
Besides, in practical terms it doesn’t matter how good POLi’s security is. Yours isn’t, because today’s consumer operating systems are still based on old designs that did not have security in mind. If your computer becomes infected, a hacker could easily modify the POLi client to behave maliciously.
Of course, said hacker could also modify your web browser to behave maliciously. The difference is that if that happens, BNZ, at least, will cover your losses. It isn’t clear that they will if POLi is involved, and POLi definitely won’t.
Until and unless your bank makes a public statement that they will cover POLi-related losses, don’t use it. Just don’t. Uninstall the client if you have it installed. Ask your merchant to provide an alternative, or, if applicable, choose a different merchant. For example, both Ascent and Mighty Ape NZ accept internet banking payments without needing any special client software, although granted you then have to wait for the payment to go through before they will ship the goods.
A small price to pay, I think.
 To minimize your risks, make sure you use a standard user account (not an admin account) for your everyday activities, and use a different standard user account for your internet banking (and nothing else). Better still, get a live DVD (a DVD which you can boot to, containing a simple operating system) and use that for internet banking. This doesn’t change anything I’ve written here. Both of these approaches are much better than nothing, but neither is foolproof.
 I have no association with either company except as a satisfied customer.