An avocation as well as my vocation.

clock_gettime on MacOS
I often run into the issue of the lack of clock_gettime(CLOCK_MONOTONIC,...) on MacOS prior to OS X 10.12 Sierra when porting software to the Mac. (And the version in Sierra is not so hot; it's limited to microsecond resolution rather than nanosecond resolution, so I read on the net.) I usually depend on a DuckDuckGo search to find the canonical answer, but today I noticed that the top hits are mostly wrong and don't even include a link to the canonical solution.
So, I'm including a link to the solution here, so I can find it the next time I need it. The answer is in Apple Technical Q&A QA1398.
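For reference, the kernel of the QA1398 technique is mach_absolute_time() plus the ticks-to-nanoseconds ratio from mach_timebase_info(). Here's a minimal sketch of a monotonic-clock shim built on those calls; the function name monotonic_gettime is my own invention, not part of any Apple API.

/* Minimal sketch of the QA1398 approach: a monotonic clock shim for
 * pre-Sierra OS X.  The name monotonic_gettime is invented here;
 * QA1398 only demonstrates the underlying Mach calls. */
#include <mach/mach_time.h>
#include <stdint.h>
#include <time.h>

static int monotonic_gettime(struct timespec *ts)
{
    static mach_timebase_info_data_t tb;    /* ticks-to-nanoseconds ratio */

    if (tb.denom == 0)
        (void) mach_timebase_info(&tb);     /* one-time initialization */

    /* The multiply can overflow after very long uptimes when the ratio
     * is not 1/1; a production version would guard against that. */
    uint64_t ns = mach_absolute_time() * tb.numer / tb.denom;
    ts->tv_sec = ns / 1000000000ULL;
    ts->tv_nsec = ns % 1000000000ULL;
    return 0;
}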
Changing UID on Mac OS X El Capitan
I've been putting off changing my Mac UID to match that of my current employer, so that I can use an NFS mount of my work home directory. I still have the scars from the last time I had to do this. Unfortunately I left my notes on a wiki at my old employer and failed to retain a copy, so I had to figure it out all over again.
I found these notes from Roman at inteller.net to be very helpful. I decided to script the change so that I'd have it the next time I need to change my UID. The hardest bit was dealing with the spaces within filenames in the Bourne shell script. I used a bash-ism, but setting the IFS would also have worked in standard Bourne shell.
Here's the script. You can also download it directly.
Caution: Make a backup before running the script!
Update:
The previous version of this script attempted to find and change the pathnames of files that contain the UID as part of their pathname. Phillip Law reported that the regular expression was insufficiently strict and would match files that happen to have the UID embedded in a string of other numbers, e.g. files named with a hash or a GUID. I made a few attempts to repair this error, but it wasn't straightforward to fix.
Meanwhile, Phillip pointed out that it isn't necessary to update the filenames of files in places like /Library/Caches and /private/var. The system will recreate them if necessary with the correct UID in their name; it's essentially the same situation as when restoring your home directory from a backup. While the system may fail to clean up the files with the old UID in their name, at worst this should only leak a small amount of disk space.
So, I've amended the script to remove the buggy portion that attempted to find and rename files with the UID in their name.
I've also added some checks to validate the provided user name and to guard against running the script while the user is logged in.
Update 2018-02-15:
I've finally incorporated the bug-fix suggested in the comments below by Jean-François Beauchamp.
#! /bin/sh
# actually uses some bash extensions; see the comment in the first
# while-read loop below
#
# change the UID of a login on Mac OS X 10.11 El Capitan
#
# run this script using sudo *from another user account*!!!
# after logging out of the account to be changed.
#
# Usage: sudo change-uid <user-name> <new UID>
#
# sbin is in the path to pick up /usr/sbin/chown
PATH=/usr/bin:/bin:/usr/sbin:/sbin

# Two arguments: the user name and the new UID.
case $# in
2) USERNAME=$1 ; NEW_UID=$2 ;;
*) echo 'Usage: change-uid <user-name> <new UID>' 1>&2; exit 1 ;;
esac

if dscl . -ls /Users | egrep -v '_.*|com.apple.*|daemon|root|nobody' | grep $USERNAME
then
    : # USERNAME is a valid user-name
else
    echo "'$USERNAME' is not a valid user on this system" >&2
    exit 1
fi

if [ `whoami` = "$USERNAME" ]
then
    echo "You cannot run this script from the '$USERNAME' account." >&2
    echo "You must run this script from a different account." >&2
    exit 1
fi

if users | grep -q $USERNAME
then
    echo "'$USERNAME' is logged in. '$USERNAME' must be logged out to run this script." >&2
    exit 1
fi

# Obtain the current UID for the specified user.
OLD_UID=`dscl . -read /Users/$USERNAME UniqueID | cut -d ' ' -f2`

# Validate that the new UID is a string of numbers.
case $NEW_UID in
(*[!0-9]*|'')
    echo 'Error: new UID is not numeric.' 1>&2
    echo 'Usage: change-uid <user-name> <new UID>' 1>&2
    exit 1 ;;
(*) echo "changing '$USERNAME' UID from $OLD_UID to $NEW_UID" ;;
esac

# First change ownership of the locked files.
# Otherwise, the find for the not-locked files will complain
# when it fails to change the UID of the locked files.
echo "chown uchg files in /Users/$USERNAME"
find -x /Users/$USERNAME -user $OLD_UID -flags uchg -print |
while read
do
    # use bash-ism REPLY to accommodate spaces in the file name
    chflags nouchg "$REPLY"
    chown -h $NEW_UID "$REPLY"
    chflags uchg "$REPLY"
done

if [ -d /Users/Shared ]
then
    echo "chown uchg files in /Users/Shared"
    find -x /Users/Shared -user $OLD_UID -flags uchg -print |
    while read
    do
        chflags nouchg "$REPLY"
        chown -h $NEW_UID "$REPLY"
        chflags uchg "$REPLY"
    done
fi

# Now change ownership of the unlocked files.
echo "chown /Users/$USERNAME"
find -x /Users/$USERNAME -user $OLD_UID -print0 | xargs -0 chown -h $NEW_UID

if [ -d /Users/Shared ]
then
    echo "chown /Users/Shared"
    find -x /Users/Shared -user $OLD_UID -print0 | xargs -0 chown -h $NEW_UID
fi

echo "chown /Library"
find -x /Library -user $OLD_UID -print0 | xargs -0 chown -h $NEW_UID

echo "chown /Applications"
find -x /Applications -user $OLD_UID -print0 | xargs -0 chown -h $NEW_UID

echo "chown /usr"
find -x /usr -user $OLD_UID -print0 | xargs -0 chown -h $NEW_UID

echo "chown /private/etc"
find -x /private/etc -user $OLD_UID -print0 | xargs -0 chown -h $NEW_UID

echo "chown /private/var"
find -x /private/var -user $OLD_UID -print0 | xargs -0 chown -h $NEW_UID

echo "chown /.DocumentRevisions-V100"
find -x /.DocumentRevisions-V100 -user $OLD_UID -print0 | xargs -0 chown -h $NEW_UID

echo "chown /.MobileBackups"
find -x /.MobileBackups -user $OLD_UID -print0 | xargs -0 chown -h $NEW_UID

sync

# Finally, change the UID of the user.
echo "changing $USERNAME UniqueID to $NEW_UID"
dscl . -change /Users/$USERNAME UniqueID $OLD_UID $NEW_UID
GNU Make and multi-variant builds
I've done multi-architecture out-of-tree build systems using GNU Make in the past, but I've just had to write my first multi-variant build system. I found multi-variant builds are much harder than multi-architecture builds, primarily because GNU Make's pattern matching in pattern rules is too weak to parse multiple variant qualifiers out of a target name. Perhaps the recent addition of Guile to GNU Make would solve that problem, but I was reluctant to use it as I wanted to maintain compatibility with older versions of Make.
Linux systemd
I'd seen a little of the controversy in the Linux world over systemd, but hadn't formed an opinion. Mac OS X has launchd which seems to work well. Systemd seems like a similar concept.
After installing and configuring Fedora 20, though, I've now got some scars from fighting with systemd. Systemd can't seem to start ypbind (who knows why), so NIS doesn't come up on a reboot. The default error log messages are useless; you have to ask for the long form of the log messages before you can find out what went wrong. For some reason, ssh takes forever (like 10 or 15 minutes) to get to the point of allowing logins; an attempt before then hangs after connecting to sshd but before running the login code. So I am starting to understand some of the frustration folks have with systemd.
I also grok the feaping creaturism of systemd, but in its defense I have to say that on a system that is crazy enough to have an out-of-memory killer, running as init is the only way to make sure a critical service stays up.
NVMe over Fabrics
I find it amusing that NVMe, after touting how it had discarded all that old unneeded SCSI baggage in the name of streamlining the protocol, is slowly reintroducing much of what it discarded. For example, NVMe 1.1 added SCSI reservations back into the architecture.
One of the big simplifications of NVMe was that, rather than sending the command in a message, the protocol places commands in shared memory and has the initiator post a pointer to the command in an I/O request queue. The target is notified via a PCI register write when the request queue is not empty; it then pulls the pointer from the queue, fetches the command from shared memory, and executes it.
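To make the mechanism concrete, here's a toy model in C of a doorbell-style request queue. This is not real NVMe code; all of the names and structures are invented for illustration, and memory barriers are omitted for brevity.

/* Toy model of the doorbell-style queue described above -- NOT real
 * NVMe code; every name here is invented for illustration. */
#include <stdint.h>

#define QUEUE_DEPTH 64

struct command {                         /* stand-in for an I/O command */
    uint8_t  opcode;
    uint64_t lba;
    uint32_t nblocks;
};

struct request_queue {
    struct command *slots[QUEUE_DEPTH];  /* pointers into shared memory */
    uint32_t tail;                       /* host-owned producer index */
    uint32_t head;                       /* device-owned consumer index */
};

/* Host side: publish a command pointer, then notify the device with a
 * single register write (the "doorbell"). */
static void submit(struct request_queue *q, struct command *cmd,
                   volatile uint32_t *doorbell)
{
    q->slots[q->tail % QUEUE_DEPTH] = cmd;   /* post pointer to command */
    q->tail++;
    *doorbell = q->tail;                     /* PCI register write */
}

/* Device side: on doorbell, pull the pointers and fetch the commands. */
static void drain(struct request_queue *q, uint32_t new_tail)
{
    while (q->head != new_tail) {
        struct command *cmd = q->slots[q->head % QUEUE_DEPTH];
        /* ... fetch *cmd from shared memory and execute it ... */
        (void) cmd;
        q->head++;
    }
}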
Now in NVMe over Fabrics, as they scale up the NVMe architecture to fabrics larger than a PCIe bus, they are running into problems that they are solving by reintroducing the sending of commands as messages.
I find it terribly ironic, but also sad, that not-invented-here hubris is leading the industry to invent a same-but-different I/O architecture.
NVMe and Linux
Looking at Intel's IDF slides on NVMe which claim that the OS path-length for the NVMe I/O stack is less than half that of the SCSI stack, I am reminded of Luben Tuikov's effort to bring sanity to the Linux SCSI stack back in 2005. Could it be that NVMe is Intel's way of routing around James Bottomley? :-)
John McCarthy
I use this photo to scare the new hires in our group :-)
I scarfed it off the net somewhere; many thanks to the original author.
Transactions for Non-Volatile Memory
Now that the SNIA Technical Working Group (TWG) on Non-Volatile Memory (NVM) Programming has published version 1 of the NVM Programming Model, the TWG is looking at friendlier programming interfaces such as transactions. I think it's far too early to attempt to standardize persistent transactional memory, but perhaps some rudimentary infrastructure can be specified that can later become part of the run-time of a full-blown persistent transactional memory implementation or some other full-featured transaction library. In the meantime, programmers could code directly to this lower-level interface and at least gain some benefit over writing their own transaction system.
Additionally, even a primitive transaction interface gives the storage system visibility into application transaction boundaries. This visibility enables the storage system to optimize data management operations such as making off-node copies for high availability or disaster recovery, or taking an application-consistent snapshot.
One interesting line of investigation follows Satyanarayanan's Lightweight Recoverable Virtual Memory (RVM) concept. RVM is a simple library that provides atomicity and durability but leaves implementation of the rest of the ACID properties to the application. It gains its simplicity by requiring the programmer to specify, in advance, the ranges of recoverable memory that will be modified during a transaction. This avoids the need for hooks into the virtual memory system or the compiler and language run-time that discover the write-set of the transaction. (Since the library does not provide isolation, there is no need to discover the read-set. There is a modern implementation of RVM that provides a static analysis tool to assist the programmer in specifying the write-set.) The RVM system was used in the implementation of the Coda file system from CMU.
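To give a flavor of the interface, here's a sketch in C of the pre-declared write-set style. The names are paraphrased from my memory of the LRVM paper, not a faithful copy of its API, and the function bodies belong to the transaction library.

/* Sketch of LRVM's pre-declared write-set style; names approximate. */
#include <stddef.h>

typedef struct rvm_tid rvm_tid_t;       /* opaque transaction identifier */

/* Provided by the transaction library. */
void begin_transaction(rvm_tid_t *tid, int restore_mode);
void set_range(rvm_tid_t *tid, void *base, size_t nbytes);
void end_transaction(rvm_tid_t *tid, int commit_mode);

struct account { long balance; };

/* Atomically move money between two accounts in recoverable memory.
 * The caller must declare each range it will modify *before* writing
 * it -- the discipline criticized below. */
void transfer(rvm_tid_t *tid, struct account *from, struct account *to,
              long amount)
{
    begin_transaction(tid, 0);
    set_range(tid, from, sizeof *from);  /* declare before writing */
    set_range(tid, to, sizeof *to);
    from->balance -= amount;
    to->balance += amount;
    end_transaction(tid, 0);
}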
Peter Chen's work in Reliable Memory (RAM I/O, or Rio) is very applicable to NVM. The programming model of the Rio File Cache is nearly identical to the Persistent Memory mode of version 1 of the NVM Programming Model. David Lowell and Peter Chen later created a variant of Satyanarayanan's RVM that is optimized for reliable memory. They described their Vista RVM in their paper Free Transactions with Rio Vista. They were able to simplify and speed up the RVM implementation by eliminating the on-disk redo log, since their in-memory undo log is persisted by the Rio file cache.
One criticism of the RVM model, recently raised in the TWG by Hans Boehm, is that it is ill-suited to today's more modular programming styles. These RVM systems were developed in the 1980s and 1990s. At that time, systems programming was done in a raw C style in which the programmer was explicitly aware of memory allocation. In this style, it is fairly easy to pre-declare the write-set of a transaction. However, in today's more modular or object-oriented programming style it is quite common for modules to hide memory allocation from the programmer. One might even argue that such hiding is one of the principal benefits of this style of programming. In the modern style, it is effectively impossible for the programmer to pre-declare the write-set of a transaction that calls libraries or sub-modules.
I find this a quite persuasive argument. Hans argues for a write-ahead after-image redo log design instead of RVM and Vista's before-image undo log. In place of pre-declaring the memory to be changed by a transaction, such a design would have the application provide both the location of the memory and the data to be written at that location to the transaction library. This is equivalent to an ordinary filesystem write system call, except that a series of such calls is grouped into an atomic transaction. While the application does not know where a library or sub-module will allocate memory, it may be able to discover the location and size of the allocated structures after the fact, and so be able to use this write-ahead logging variant of RVM for transactional persistence.
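In API terms, the write-ahead variant looks like a grouped write call. Here's a hypothetical sketch; all of the names are my own invention, and the function bodies belong to the transaction library.

/* Hypothetical sketch of the write-ahead (redo) logging interface;
 * every name here is invented for illustration. */
#include <stddef.h>

typedef struct txn txn_t;               /* opaque transaction handle */

txn_t *txn_begin(void);

/* Like a filesystem write: supply both the destination in persistent
 * memory and the after-image data to put there.  The library appends
 * the (dst, data) pair to the redo log. */
void txn_write(txn_t *tx, void *dst, const void *src, size_t nbytes);

/* Commit: persist the log, then copy the after-images to their
 * destinations -- the extra copy discussed in the next paragraph. */
void txn_commit(txn_t *tx);

/* Usage: group two updates into one atomic transaction. */
void transfer(long *from, long *to, long amount)
{
    long f = *from - amount, t = *to + amount;
    txn_t *tx = txn_begin();
    txn_write(tx, from, &f, sizeof f);
    txn_write(tx, to, &t, sizeof t);
    txn_commit(tx);
}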
The write-ahead log approach introduces an additional copy of the data that is not needed in the before-image logging approach. The additional copy is unfortunate because one of the principal goals of the NVM Programming TWG is to maximize the performance of the interface. The underlying persistent memory, whether it be power-protected DRAM backed by flash in an NV-DIMM or some new kind of storage-class memory, is expected to be nearly as fast as DRAM. The before-image logging of Vista requires one copy of the before-image of the write-set. The write-ahead log approach requires one copy of the after-image of the write-set to the write-ahead intent log followed by copying the after-image to the persistent data. This additional copy doubles the time required for the I/O.
Moreover, today's highly modular object-oriented systems usually completely hide the location and size of their internal data structures. These programs perform I/O by iterating over all their object instances and asking each object to serialize itself onto a supplied output stream, rather than discovering the memory locations of the object instances and explicitly writing them to an output stream. So even a write-ahead log design for transactions is difficult for a programmer to apply in a modern object-oriented program.
An alternative transaction interface uses the page protection features of the virtual memory subsystem to determine the write-set of the application at run-time. Lowell and Chen's Vista version of RVM provides this option as well. Such a VM-based mechanism solves the modularity problem but it may introduce a high overhead. The write-set is only discovered with VM-page-level granularity. If the program's persistent data structures do not have a high degree of spatial locality, this coarse granularity results in logging many unnecessary memory locations with the attendant high cost in both log storage space and execution time. One of the attractions of the new NVM technology is that, due to the zero cost for random access, it enables programs to use persistent data structures that do not have a high degree of spatial locality; that is, to not be restricted to disk-friendly data structures. A low degree of spatial locality imposes unacceptable overhead costs on a transaction design that uses VM page-protection to detect the write-set at run-time.
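For concreteness, the page-protection mechanism usually looks something like the sketch below, using POSIX mprotect() and a SIGSEGV handler. record_page() stands in for the transaction library's logging, and error handling is omitted. (Calling mprotect from a signal handler is the standard trick here, though not strictly async-signal-safe.)

/* Sketch of run-time write-set detection via VM page protection. */
#include <signal.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

static long page_size;
static void *write_set[1024];            /* pages dirtied this transaction */
static int nwritten;

static void record_page(void *page)
{
    write_set[nwritten++] = page;        /* a real library would copy the
                                            page's before-image to its log */
}

static void segv_handler(int sig, siginfo_t *si, void *ctx)
{
    (void) sig; (void) ctx;
    /* Round the faulting address down to its page. */
    void *page = (void *)((uintptr_t)si->si_addr &
                          ~(uintptr_t)(page_size - 1));
    record_page(page);                   /* page joins the write-set */
    mprotect(page, page_size, PROT_READ | PROT_WRITE);  /* allow the write */
}

/* At transaction begin: write-protect the persistent heap so the first
 * store to each page faults exactly once. */
static void begin_tracking(void *heap, size_t len)
{
    struct sigaction sa;
    sa.sa_sigaction = segv_handler;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    page_size = sysconf(_SC_PAGESIZE);
    mprotect(heap, len, PROT_READ);      /* trap the first write per page */
}

Note how the granularity problem shows up directly in this sketch: one stray store to a page logs the entire page, however little of it the transaction actually changed.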
Perhaps the TWG should provide all three interfaces and, in the near term, let the programmer decide which trade-off is appropriate. In the long term we can hope for persistent transactional memory extensions to programming languages, compilers, and language run-times, or for CPU hardware support for fine-grained tracking of writes at run-time.
OmniOutliner Pro to MediaWiki export
I understand that an alternative to a large mediawiki document is to create a tree of small mediawiki pages. Still, I sometimes prefer to create a document as a single page; sometimes I want a modernist linear narrative rather than a postmodern nonlinear link-fest.
For small to medium-sized documents, I like to use SubEthaEdit because it's a good basic editor with a user-contributed mediawiki syntax coloring plugin.
For medium to large documents, where the topic folding, editing, and navigation offered by an outliner is very helpful, I like to use OmniOutliner Pro. I've modified Fletcher T. Penney's MarkDown XSLT OmniOutliner plugin to export from OmniOutliner Pro to mediawiki.
You can download my MediaWiki export plugin from my software area or directly from here.
You can also download the source Xcode project.
Perforce on the Mac
I much prefer Xcode's FileMerge to Perforce's p4merge application. Now if I could only remember how to set the Perforce client up to use FileMerge instead of p4merge ...
Update: