

Image: Robert A. Heinlein signing autographs at MidAmeriCon

By Martin Anderson

Among the most influential and prescient science-fiction authors of the twentieth century, Robert A. Heinlein’s contributions to culture extend from linguistics and social theory through to innovations in furniture.

His prognostications about the destiny of mankind are informed as much by our history as by our potential, with a depth of understanding that makes classic novels such as Stranger in a Strange Land works of enduring world literature.

Heinlein preferred to call his output “speculative fiction,” to distinguish it from the exploitative “space opera” fare that he considered to have tarnished the SF literary genre.

Though many of his themes were broad, his interest in the technological and political future of humanity nonetheless brought up some fascinating as well as accurate predictions about the coming decades.

Nuclear arms development and the Cold War

Heinlein’s 1941 short story Solution Unsatisfactory prefigures the tension that would develop between the United States and the Communist Eastern Bloc in the decades after WWII.

This included nuclear weapons and the fear of radioactive fallout that captured popular imagination throughout the Cold War.

In particular, lethal “radioactive dust” is used in Solution Unsatisfactory by the Allies to devastate Germany and bring the Second World War to a conclusion. Yet this weapon also brought with it the understanding of a devastating technology that threatens to destabilize the world order:

Someone in the United States government had realized the terrific potentialities of uranium 235 quite early and, as far back as the summer of 1940, had rounded up every atomic research man in the country and had sworn them to silence. Atomic power, if ever developed, was planned to be a government monopoly, at least till the war was over.

It might turn out to be the most incredibly powerful explosive ever dreamed of, and it might be the source of equally incredible power. In any case, with Hitler talking about secret weapons and shouting hoarse insults at democracies, the government planned to keep any new discoveries very close to the vest.

In the story, the government’s Special Defense Project No. 347 uncannily shadows the creation and development of the top-secret Manhattan Project, the US effort to develop the first effective nuclear weapon, which would indeed conclusively end the war—though in the East, rather than the West.

Although Heinlein did not predict the explosive blast of nuclear weapons, he accurately wrote that toxic radiation would be the primary cause of death when atomic weaponry was used.

Interestingly, the author gets it exactly right at one point, describing the sheer destructive scale of a nuclear device:

We had a vision of a one-ton bomb that would be a whole air raid in itself, a single explosion that would flatten out an entire industrial center.

However, Heinlein decides that the idea is too fantastical, and proceeds along the lines of a weapon that effectively uses fallout—an extension of the poison gas used to such controversial effect in the previous world war.

Heinlein also predicts the range of an intercontinental ballistic missile (ICBM), a year before the first test flight of Wernher von Braun’s V-2 rocket.

…The problem was, strangely enough, to find an explosive which would be weak enough to blow up only one county at a time, and stable enough to blow up only on request. If we could devise a really practical rocket fuel at the same time, one capable of driving a war rocket at a thousand miles an hour, or more, then we would be in a position to make most anybody say ‘uncle’ to Uncle Sam.

Solution Unsatisfactory extrapolates a timeline from the splitting of the uranium atom in 1938, identified as nuclear fission the following year, and from the formation of the Advisory Committee on Uranium. Knowledge of the potential of splitting the uranium atom had entered the public consciousness before wartime security concerns cast a veil over all ongoing research.

In the story, the government envisions a global Pax Americana, with the US as the trustworthy guardian of the ultimate weapon and the ultimate deterrent to aggression:

[The weapon] is not just simply sufficient to safeguard the United States, it amounts to a loaded gun held at the head of every man, woman, and child on the globe!

But Heinlein delivers a more pragmatic view of how new military technologies develop and spread in the real world, and of how the post-war arms race would actually play out over nearly half a century in the shadow of Mutually Assured Destruction (MAD):

…It’s like this: Once the secret is out — and it will be out if we ever use the stuff! — the whole world will be comparable to a room full of men, each armed with a loaded .45. They can’t get out of the room and each one is dependent on the good will of every other one to stay alive. All offense and no defense.

Computer Aided Design (CAD)

Heinlein also showed foresight into the world of design. In his 1957 novel The Door Into Summer, he prefigured the advent of the early Computer Aided Design (CAD) systems of the 1960s, which ultimately developed into modern architectural and 3D design software.

In the book, an inventor takes architectural drafting out of its traditional artisanal niche and into its ultimate logical space—numerically precise, computer-driven plotting:

This gismo [sic] would let them sit down in a big easy chair and tap keys and have the picture unfold on an easel above the keyboard. Depress three keys simultaneously and have a horizontal line appear just where you want it; depress another key and you fillet it in with a vertical line; depress two keys and then two more in succession and draw a line at an exact slant.

Cripes, for a small additional cost as an accessory, I could add a second easel, let an architect design in isometric (the only easy way to design), and have the second picture come out in perfect perspective rendering without his even looking at it. Why, I could even set the thing to pull floor plans and elevations right out of the isometric.

The first recognizable CAD program emerged six years later in the PhD thesis work of MIT scholar Ivan Sutherland. Entitled Sketchpad, the system pioneered the idea of “instances” derived from template objects, now common in 3D and CAD software, whereby changing the original object passes those changes on to all the “live” copies.

The program was punched onto tape and fed into a computer occupying 1,000 square feet and boasting 320 KB of memory, which in itself took up a cubic yard of space. Demonstration footage of the system survives, including the then-revolutionary use of a light pen directly on the screen.

Voice recognition technologies

The Door Into Summer also cautiously mooted the possibility of natural language recognition and speech-to-text technologies, anticipating the difficulties involved in computer-driven transcription:

[I] had expected that there would be automatic secretaries in use — I mean a machine you could dictate to and get back a business letter, spelling, punctuation, and format all perfect, without a human being in the sequence. But there weren’t any. Oh, somebody had invented a machine which could type, but it was suited only to a phonetic language like Esperanto and was useless in a language in which you could say: ‘Though the tough cough and hiccough plough him through.’

Though Bell Labs’ Audrey prototype speech recognition experiment pre-dated the novel by five years, it required far more subject-specific training than the few minutes most iPhone users spend calibrating Siri, and it could only recognize spoken digits from zero to nine.

The Shoebox speech recognition system that IBM displayed a decade later at the World’s Fair added operators such as “minus” and “plus” to extend this vocabulary to 16 maths-related words, creating a rudimentary speech-driven calculator.

The first functional speech recognition system emerged with Carnegie Mellon’s Harpy project in 1976, developed with support from the Defense Advanced Research Projects Agency (DARPA). Harpy was capable of recognizing more than 1,000 words, roughly the vocabulary of a three-year-old.

The Internet

Visionary science-fiction writers have been predicting a global information network since before there was any basis to make that imaginative leap.

Amongst others, Mark Twain envisaged a machine that could connect the world in a short story from 1898; E.M. Forster described a worldwide messaging and video conferencing system in 1909; sci-fi legend Isaac Asimov wrote of “computer outlets” hooked up to enormous libraries of global knowledge; and contemporary writer William Gibson anticipated the internet by eight years in the short story Burning Chrome in 1982, and at greater length in Neuromancer in 1984.

In the posthumously-published novel For Us, The Living: A Comedy of Customs, written in 1938, Heinlein imagines a national information network. However, it’s an analog solution: a vast lattice of pneumatic tubes threading the country, through which one can send requests to librarians for photocopies of articles. As a network solution, it’s closer to the administrative tubes of the Ministry of Truth in Orwell’s 1984.

The author returned to the idea with far greater authenticity in 1983 with the novel Friday, where he gives an account of the random nature of link-exploration that is far nearer to how we currently use the Internet than the more abstract accounts in Neuromancer and similar fiction of the period.

The central character in the book describes herself as The World’s Greatest Authority, a moniker borrowed from the late US comedian Irwin Corey. At one point she describes the startlingly Google-like process of how she stumbled across his work:

At one time there really was a man known as ‘the World’s Greatest Authority.’ I ran across him in trying to nail down one of the many silly questions that kept coming at me from odd sources. Like this: Set your terminal to “research.” Punch parameters in succession “North American culture,” “English-speaking,” “mid-twentieth century,” “comedians,” “the World’s Greatest Authority.” The answer you can expect is “Professor Irwin Corey.” You’ll find his routines timeless humor.

Sensor-driven lights

We’re becoming increasingly used to lights that turn themselves on or off when we enter or leave a room. Since they’re usually activated by movement, it’s not yet a perfect solution. In the 1950 novella The Man Who Sold The Moon, Robert A. Heinlein envisages a better approach:

As they left their joint office, Strong, always penny conscious, was careful to switch off the light. Harriman had seen him do so a thousand times; this time he commented. ‘George, how about a light switch that turns off automatically when you leave a room?’

‘Hmm—but suppose someone were left in the room?’

‘Well. . . hitch it to stay on only when someone was in the room—key the switch to the human body’s heat radiation, maybe.’

At present the heat-sensing light switch is not a mainstream consumer product, though passive infrared (PIR) sensors are gaining commercial ground.

The waterbed

Heinlein’s 1961 bestselling novel Stranger in a Strange Land is perhaps his best-known work, and his most influential.

The US Library of Congress numbers it among the 88 books that shaped America; it contributed a new word, “grok,” permanently to the English language and to computer culture; it even inspired an enduring new church movement centered on the core themes and ideas of the novel.

Yet the only distinct technological prediction that emerged from it was the curious idea of the waterbed—now considered one of the oddest crazes of the 1970s.

Heinlein had first described a waterbed in his 1942 novel Beyond This Horizon, and returned to it in the Hugo Award-winning Double Star in 1956:

Against one bulkhead and flat to it were two bunks, or “cider presses”, the bathtub-shaped hydraulic, pressure-distribution tanks used for high acceleration in torchships…

Two gravities is not bad, not when you are floating in a liquid bed. The skin over the top of the cider press pushed up around me, supporting me inch by inch; I simply felt heavy and found it hard to breathe.

The waterbed in Stranger in a Strange Land likewise has therapeutic properties:

This young man Smith was busy at that moment just staying alive. His body, unbearably compressed and weakened by the strange shape of space in this unbelievable place was at last somewhat relieved by the softness of the nest in which these others had placed him…
The patient floated in the flexible skin of the hydraulic bed. He appeared to be dead.

The origin of this fascination was Heinlein’s own experience as a bedridden convalescent after being discharged from the US Navy in 1934 for tuberculosis. In Expanded Universe (1980), he describes how the idea occurred:

I designed the waterbed during years as a bed patient in the middle thirties; a pump to control water level, side supports to permit one to float rather than simply lying on a not very soft water filled mattress. Thermostatic control of temperature, safety interfaces to avoid all possibility of electric shock, waterproof box to make a leak no more important than a leaky hot water bottle rather than a domestic disaster, calculation of floor loads (important!), internal rubber mattress and lighting, reading, and eating arrangements—an attempt to design the perfect hospital bed by one who had spent too damn much time in hospital beds.

Heinlein’s descriptions across the three novels were so detailed that they served as prior art, defeating US entrepreneur Charles Hall’s attempt to patent the idea in the 1960s. Hall was ultimately able to begin manufacturing, and is currently attempting to interest millennials in a waterbed revival.

A legacy of the future

Heinlein’s predictions combine a broad visionary streak with an inventor’s practical curiosity. Though he wrote about the spiritual destiny of our species, much of his most accurate foreshadowing arose from inconveniences that he experienced in his own life. As we’ve seen, several of his predictions are still maturing.

With a varied and evolving output that would divide his fans throughout his career, Heinlein never achieved stable and consistent recognition either during his lifetime or posthumously. His prose and ideas lacked the signature stylistic and thematic hallmarks that were to distinguish peers such as Ray Bradbury, Arthur C. Clarke, Philip K. Dick, and Isaac Asimov. But this intellectual restlessness may have contributed to his above-average ability to predict the technology of our times.

Image sources: Dd-b/Wikimedia Commons

In this post we will set up Rancher’s Convoy storage plugin with NFS to provide data persistence in Docker Swarm.

The Overview:

Essentially, we will have an NFS volume. When a service gets created on Docker Swarm, the cluster creates the volume with a path mapping, so when a container gets spawned, restarted, or scaled, the container that starts on the new node will be aware of the volume and will find the data it expects.

It’s also good to note that our NFS server will be a single point of failure, so it’s worth looking at a distributed volume system such as GlusterFS, XtreemFS, or Ceph.

  • NFS Server (10.8.133.83)
  • Rancher Convoy Plugin on Each Docker Node in the Swarm (10.8.133.83, 10.8.166.19, 10.8.142.195)

Setup NFS:

Setup the NFS Server

Update:

In order for the containers to be able to change permissions, you need to set the following export options: (rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0)
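Applied to this guide’s /vol export and the three node addresses, the /etc/exports entry would then look something like this (a sketch, one line per export):

/vol 10.8.133.83(rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0) 10.8.166.19(rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0) 10.8.142.195(rw,async,no_subtree_check,no_wdelay,crossmnt,insecure,all_squash,insecure_locks,sec=sys,anonuid=0,anongid=0)

After editing the file, re-export with sudo exportfs -ra, or restart nfs-kernel-server as shown below.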

$ sudo apt-get install nfs-kernel-server nfs-common -y
$ sudo mkdir /vol
$ sudo chown -R nobody:nogroup /vol
$ sudo bash -c "echo '/vol 10.8.133.83(rw,sync,no_subtree_check) 10.8.166.19(rw,sync,no_subtree_check) 10.8.142.195(rw,sync,no_subtree_check)' >> /etc/exports"
$ sudo systemctl restart nfs-kernel-server
$ sudo systemctl enable nfs-kernel-server

Set up the NFS clients on each Docker node:

$ sudo apt-get install nfs-common -y
$ sudo mount 10.8.133.83:/vol /mnt
$ df -h
$ sudo umount /mnt

If you can see that the volume is mounted, unmount it and add it to /etc/fstab so the volume is mounted on boot:

$ sudo bash -c "echo '10.8.133.83:/vol /mnt nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0' >> /etc/fstab"
$ sudo mount -a

Install Rancher Convoy Plugin:

The plugin needs to be installed on each Docker node that will be part of the swarm:

$ cd /tmp
$ wget https://github.com/rancher/convoy/releases/download/v0.5.0/convoy.tar.gz
$ tar xzf convoy.tar.gz
$ sudo cp convoy/convoy convoy/convoy-pdata_tools /usr/local/bin/
$ sudo mkdir -p /etc/docker/plugins/
$ sudo bash -c 'echo "unix:///var/run/convoy/convoy.sock" > /etc/docker/plugins/convoy.spec'

Create the init script at /etc/init.d/convoy:

Thanks to deviantony

#!/bin/bash
### BEGIN INIT INFO
# Provides:          convoy
# Required-Start:    $remote_fs $syslog
# Required-Stop:     $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: Start daemon at boot time
# Description:       Enable service provided by daemon.
### END INIT INFO

dir="/usr/local/bin"
cmd="convoy daemon --drivers vfs --driver-opts vfs.path=/mnt/docker/volumes"
user="root"
name="convoy"

pid_file="/var/run/$name.pid"
stdout_log="/var/log/$name.log"
stderr_log="/var/log/$name.err"

get_pid() {
    cat "$pid_file"
}

is_running() {
    [ -f "$pid_file" ] && ps `get_pid` > /dev/null 2>&1
}

case "$1" in
    start)
    if is_running; then
        echo "Already started"
    else
        echo "Starting $name"
        cd "$dir"
        if [ -z "$user" ]; then
            sudo $cmd >> "$stdout_log" 2>> "$stderr_log" &
        else
            sudo -u "$user" $cmd >> "$stdout_log" 2>> "$stderr_log" &
        fi
        echo $! > "$pid_file"
        if ! is_running; then
            echo "Unable to start, see $stdout_log and $stderr_log"
            exit 1
        fi
    fi
    ;;
    stop)
    if is_running; then
        echo -n "Stopping $name.."
        kill `get_pid`
        for i in {1..10}
        do
            if ! is_running; then
                break
            fi

            echo -n "."
            sleep 1
        done
        echo

        if is_running; then
            echo "Not stopped; may still be shutting down or shutdown may have failed"
            exit 1
        else
            echo "Stopped"
            if [ -f "$pid_file" ]; then
                rm "$pid_file"
            fi
        fi
    else
        echo "Not running"
    fi
    ;;
    restart)
    $0 stop
    if is_running; then
        echo "Unable to stop, will not attempt to start"
        exit 1
    fi
    $0 start
    ;;
    status)
    if is_running; then
        echo "Running"
    else
        echo "Stopped"
        exit 1
    fi
    ;;
    *)
    echo "Usage: $0 {start|stop|restart|status}"
    exit 1
    ;;
esac

exit 0

Make the script executable:

$ chmod +x /etc/init.d/convoy

Enable the service on boot:

$ sudo systemctl enable convoy

Start the service:

$ sudo /etc/init.d/convoy start

This should be done on all the nodes.

Externally Managed Convoy Volumes

One thing to note: after you delete a volume, you will still need to delete the directory from the path where it’s hosted, as the application does not do that by itself.

Creating the Volume Beforehand:

$ convoy create test1
test1

$ docker volume ls
DRIVER              VOLUME NAME
convoy              test1

$ cat /mnt/docker/volumes/config/vfs_volume_test1.json
{"Name":"test1","Size":0,"Path":"/mnt/docker/volumes/test1","MountPoint":"","PrepareForVM":false,"CreatedTime":"Mon Feb 05 13:07:05 +0000 2018","Snapshots":{}}

Viewing the volume from another node:

$ docker volume ls
DRIVER              VOLUME NAME
convoy              test1

Creating a Test Service:

Create a test service to check data persistence. Our docker-compose.yml:

version: '3.4'

volumes:
  test1:
    external: true

networks:
  appnet:
    external: true

services:
  test:
    image: alpine:edge
    command: sh -c "ping 127.0.0.1"
    volumes:
      - test1:/data
    networks:
      - appnet

Creating the Overlay Network and Deploying the Stack:

$ docker network create -d overlay appnet
$ docker stack deploy -c docker-compose.yml apps
Creating service apps_test

Write data to the volume in the container:

$ docker exec -it apps_test.1.iojo7fpw8jirqjs3iu8qr7qpe sh
/ # echo "ok" > /data/file.txt
/ # cat /data/file.txt
ok

Scale the service:

$ docker service scale apps_test=2
apps_test scaled to 2

Inspect to see if the new replica is on another node:

$ docker service ps apps_test
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE               ERROR                         PORTS
myrq2pc3z26z        apps_test.1         alpine:edge         scw-docker-1        Running             Running 45 seconds ago
ny8t97l2q00c         \_ apps_test.1     alpine:edge         scw-docker-1        Shutdown            Failed 51 seconds ago       "task: non-zero exit (137)"
iojo7fpw8jir         \_ apps_test.1     alpine:edge         scw-docker-1        Shutdown            Failed about a minute ago   "task: non-zero exit (137)"
tt0nuusvgeki        apps_test.2         alpine:edge         scw-docker-2        Running             Running 15 seconds ago

Log on to the new container and check whether the data has persisted:

$ docker exec -it apps_test.2.tt0nuusvgekirw1c5myu720ga sh
/ # cat /data/file.txt
ok

Delete the stack and redeploy, then have a look at the data we created earlier; you will notice that it has persisted:

$ docker stack rm apps
$ docker stack deploy -c docker-compose.yml apps
$ docker exec -it apps_test.1.la4w2sbuu8cmv6xamwxl7n0ip cat /data/file.txt
ok
$ docker stack rm apps

Create Volume via Compose:

You can also create the volume at service/stack creation time, so you don’t need to create it beforehand. The compose file:

version: '3.4'

volumes:
  test2:
    driver: convoy
    driver_opts:
      size: 10

networks:
  appnet:
    external: true

services:
  test:
    image: alpine:edge
    command: sh -c "ping 127.0.0.1"
    volumes:
      - test2:/data
    networks:
      - appnet

Deploy the Stack:

$ docker stack deploy -c docker-compose-new.yml apps
Creating service apps_test

List the volumes and you will notice that the volume was created:

$ docker volume ls
DRIVER              VOLUME NAME
convoy              apps_test2
convoy              test1

Let’s inspect the volume to see more details about it:

$ docker volume inspect apps_test2
[
    {
        "CreatedAt": "0001-01-01T00:00:00Z",
        "Driver": "convoy",
        "Labels": {
            "com.docker.stack.namespace": "apps"
        },
        "Mountpoint": "/mnt/docker/volumes/apps_test2",
        "Name": "apps_test2",
        "Options": {
            "size": "10"
        },
        "Scope": "local"
    }
]

As mentioned earlier, if you delete the volume, you need to delete the data directories as well:

$ docker volume rm test1
test1

$ ls /mnt/docker/volumes/
apps_test2  config  test1

$ rm -rf /mnt/docker/volumes/test1

More info about the project: https://github.com/rancher/convoy


People have always looked for ways to make their work easier. It is a matter of debate whether using tools and automation are distinctly human traits, or whether we share them with other species. The fact is that we try to outsource our most mundane tasks to machines. And that’s great!

Why automate?

Repetition often leads to boredom and fatigue. Boredom is the first step toward burnout, while fatigue is one of the major sources of mistakes. Since we don’t want our colleagues (or ourselves) to burn out as much as we don’t want to make costly mistakes, we try to automate our everyday tasks.

There seems to be a proliferation of software dedicated to the automation of common tasks. In the Node.js ecosystem alone, there are (or used to be) solutions like Bower, Yeoman, Grunt, Gulp, and npm scripts.

But there is a good, standard UNIX tool that many forgot, or never learned. By standard I mean it actually has robust documentation. I’m talking about make; more accurately, this article focuses on GNU make. It’s available on macOS, Windows, Linux, and most other operating systems.

make is so standard you probably already have it installed. Type make in a command line and check for yourself. This piece of software came out in 1977, which means it’s pretty much battle-tested. Is it quirky? Yup, even by 1970s standards. But it does its job well, and that is all we want from it.

Isn’t Make for C/C++ code?

When you read about make you probably recall that there used to be such a tool to build C/C++ projects back in the day. It went like this: ./configure && make && make install. Yes, we are talking about exactly the same tool. And frankly, it’s not limited to compiling C/C++ code. To be honest, it can’t even compile the code.

Pretty much all make understands is files. It knows whether a file exists or not and which file is more recent. The other half of its power lies in maintaining a dependency graph. Not much, but those two features are what constitute its power.

In order for make to actually do anything, you write a set of recipes. Each recipe consists of a target, zero or more dependencies, and zero or more rules. Targets are files that you want to obtain. Dependencies are files needed to create or update the targets. The set of rules describes the process of transforming dependencies into targets. For example, imagine you want to automate the installation of Node.js packages:
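A minimal sketch of that recipe (note that rule lines must be indented with a literal tab):

node_modules: package.json
	npm install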

This means the file node_modules (yes, directories are files too) can be derived from the file package.json by running the npm install rule. Still with me?

Now, those dependencies can be other targets as well. This means we can chain different sets of rules and create pipelines: for example, making the test-results directory depend on the build directory, the build directory depend on node_modules, and node_modules depend on package.json. Heck, we can even create package.json dynamically, making it the target of another rule.

Remember when I mentioned that make keeps track of which file is more recent? This is what actually saves us time. You see, without this feature, each time we ran make test (following the example above) we would have to run the whole chain from the beginning (npm install, build, then test). But if nothing has changed, why install the packages once again? Why build? Why run the tests?

This is where make really shines. While figuring out the order of the jobs, it checks the timestamps of targets and dependencies. It follows a rule only if

  • the target does not exist, or
  • one or more of the dependencies is more recent than the target.

One thing! As make test won’t actually create a file named test, we need to add this target as a dependency of the special .PHONY target. That’s another convention. Like this (a sketch, assuming the build target from the chain above):
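.PHONY: test

test: build
	npm test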

In our example above, a single change in package.json would result in building everything from scratch. But if we only change the code of one of the tests, make would skip all the steps prior to tests. That is, provided the dependencies are written correctly.

But the Language I Use Already Has Its Own Build System…

Many modern programming languages and environments come with their own build tools. Java has Ant, Maven, and Gradle, Ruby has its Rake, Python uses setuptools. And if you are worried I’m about to take those toys away from you and replace them with make, you are mistaken.

Look at it this way: how much time is needed to introduce a person to your team and make that person productive? This means setting up the development environment, installing all the dependencies, configuring every moving part, building the project, running the project, maybe even deploying the project to a development environment.

Will a new hire start the actual work in a matter of hours? Days? Hopefully not weeks. Remember it’s not just the new hire’s time that’s wasted in the setup process. Somebody will also be bothered with lots of questions when things go wrong. And they usually do.

There is a convention I like to use in my projects. Since it is common to multiple projects, people are free to migrate between them, and each new person only needs to learn it once. The convention assumes that projects have their own Makefiles with a set of predefined targets:

  • make prepare installs all the external applications that might be needed. Normally this is done with a Brewfile and Homebrew/Linuxbrew. Since this step is optional, coders can choose their own installation method at their own risk
  • make dev sets up a local development environment. Usually, it builds and brings up Docker containers. But since make acts as a wrapper, it can be easily substituted by whatever is required (like npm serve)
  • make deploy deploys the code to the selected environment (by default, development). Under the hood, it usually runs Ansible.
  • make infrastructure is a prerequisite for make deploy, as it uses Terraform to create said environments in the first place.
  • make all produces all the artifacts required for deployment (a skeleton of this convention is sketched below).
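A bare-bones Makefile implementing this convention might look like the following; the recipe bodies are placeholders, and names such as inventories/dev and myproject are illustrative:

.PHONY: prepare dev deploy infrastructure all

# Install external tools listed in the Brewfile.
prepare:
	brew bundle

# Build and bring up the local development environment.
dev: prepare
	docker-compose up --build -d

# Create the target environment in the cloud.
infrastructure:
	cd terraform && terraform apply

# Ship the artifacts to the environment.
deploy: all infrastructure
	ansible-playbook -i inventories/dev deploy.yml

# Produce all artifacts required for deployment.
all:
	docker build -t myproject:latest .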

You know what that means? It means the mandatory README.md can focus on the business needs of the project and outline some collaboration processes. At the end, we attach the above list so everyone knows what those targets do. And when you enter a new project, all you have to do is run make prepare and make dev. After a few CPU cycles you have a working project in front of you, ready to hack on.

I Have a Continuous Integration Pipeline for That

At this point, some people may notice what I am getting at. Artifacts, steps, deployment, infrastructure. That’s what our Continuous Integration/Continuous Deployment pipeline does. I’m sure it does! But remember that CI/CD is not only there to run tests each time a new commit pops up.

Properly implemented CI/CD can help reduce debugging by making it easier to reproduce the issue and perform the root cause analysis. How? Versioned artifacts are one such means. They may help with finding the root cause, but not necessarily in fixing it.

To fix the bug you need to alter the code and produce your own build. See what I’m getting at? If your CI/CD pipeline can be mirrored locally, developers can test and deploy tiny changes without the need to actually use the CI/CD pipeline, thus shortening the cycle. And the easiest way to make your CI/CD pipeline available locally is to make it a thin wrapper around make.

Say you have a backend and a frontend, and you have some tests for them as well (if not, you’re crazy running CD without tests!). That would make four distinct CI jobs. And they could be pretty much summed up as calling make backend, make test-backend, make frontend, and make test-frontend. Or whatever convention you want to follow.
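A sketch of those four targets; the build and test commands here are placeholders, to be replaced with whatever the project actually uses:

.PHONY: backend test-backend frontend test-frontend

backend:
	go build -o bin/server ./cmd/server

test-backend:
	go test ./...

frontend: node_modules
	npm run build

test-frontend: frontend
	npm test

node_modules: package.json
	npm install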

This way, no matter whether it runs on a local machine or on CI, the code is built exactly the same way, with exactly the same steps involved. The fewer commands go into your Jenkinsfile or .travis.yml (or some such), the less you rely on a Holy Build Machine.

Ok, But Does Anyone Actually Use Make?

It turns out, yes. If you look around you’ll find articles like “Time for Makefiles to Make a Comeback” (by Jason Olson), “The Power Of Makefile” (by Ahmad Farag), “Rewriting our deploy tooling: from Makefile to Bash and back again” (by Paul David), and “Makefile for Node.js developers” (by Patrick Heneise). And these are articles from the last year, not reminiscences from the past century.

Yes, I admit that make is clunky. I know about its many shortcomings and weird language features. But show me a better tool for actual development workflow automation and I’ll be glad to switch. Until then I’ll ROTFL looking at this:

https://asciinema.org/a/dQb0jENCYWsBOCC9UiKxxKG4x

It’s available on GitHub, if you want to fly it.

Cool, Now Show Me the Code

Here are some excerpts to show you what’s possible with make.
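First, a development-environment recipe. This is a minimal sketch; the Homebrew formula name and the install path of the git-hooks binary are assumptions:

dev: .git/hooks.old

# git-hooks moves the repository's original hooks to .git/hooks.old, so the
# presence of that file tells us the hooks were already installed.
.git/hooks.old: | /usr/local/bin/git-hooks
	git hooks install

# We only care that the binary exists, hence the order-only dependency above.
/usr/local/bin/git-hooks:
	brew install git-hooks

.PHONY: dev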

This snippet sets up a simple development environment. Since we want to make sure all developers use the same set of pre-commit hooks while working with Git, we install those hooks for them.

To do this we need to install git-hooks (that’s the name of the hooks manager). We take advantage of the knowledge that git-hooks moves the original hooks to .git/hooks.old, so we check for the presence of that file to determine whether we need to run git hooks install or not.

One trick we use here is | to denote order-only dependencies. If you just want to make sure something exists, rather than requiring it to have changed more recently than the target, order-only dependencies are the way to go. So far, so good, I suppose?

Now imagine we want to build a Docker container that contains a file we cannot distribute in our source code.
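A sketch of such a recipe; the file names, image tag and download URL are illustrative:

# An empty "stamp" file records that docker build succeeded at some point.
docker.stamp: Dockerfile $(shell find src -type f) licensed-data.bin
	docker build -t ourrepo/ourapp:latest .
	touch docker.stamp

# Fetch the file we cannot distribute; credentials come from environment
# variables, and the @ prefix keeps the command (and secrets) off the screen.
licensed-data.bin:
	@curl -sSf -u "$$DATA_USER:$$DATA_PASS" -o $@ https://example.com/licensed-data.bin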

Since we cannot use the actual file created by Docker (because images have tight permissions), we do the second best thing. We create an empty file that indicates we have successfully run docker build at one point in time.

A common convention calls such files “stamps”. Our Docker image stamp depends, obviously, on the Dockerfile, on the source files, and on another target, which runs curl to fetch the file, obtaining credentials from environment variables.

Since we don’t want to print our credentials to the output we prefix the command with @. This means the rule itself is not printed to the screen. The output of the rule, however, isn’t silenced. Keep that in mind if any of the programs you want to run have a tendency of logging sensitive information to stdout or stderr.

Ok, we can set up git hooks and we can build some Docker images. Why not let developers create their own environments in the cloud and deploy to them?
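Building on the stamp target above, a sketch of those recipes (the Terraform directory, registry name and playbook path are illustrative):

.PHONY: infrastructure deploy

infrastructure:
	cd terraform && terraform apply

deploy: docker.stamp infrastructure
	docker push ourrepo/ourapp:latest
	ansible-playbook -i inventories/dev deploy.yml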

The actual Infrastructure as Code and Configuration Management are beyond the scope of this article. Suffice it to say that terraform apply manages cloud resources and ansible-playbook performs configuration on remote machines. You probably know what docker push does. In short, it pushes the local image to Docker Hub (or any other registry) so you can access it from anywhere. At this point, I’m sure you can figure out what the above snippet does.

So, Who’s This Tool For?

Even though DevOps has been rising in hype recently, there is still a lot of separation between the Dev and the Ops. Some tools are used solely by Dev, some solely by Ops. There is a bit of common ground, but how far it reaches depends on the team.

Development package management, source code layout, and coding guidelines are all the realm of Dev. Infrastructure as Code, Configuration Management, and orchestration are toys for the Ops. The build system and Continuous Integration pipeline might be split between the two, or might belong to either party. Can you see how the common ground is stretched thin?

make changes things, allowing for broader collaboration. Since it serves the purposes of both Dev and Ops, it is a common ground. Everyone speaks its language and everyone can contribute. But because it is so easy to use even when you want to do complex things (as in our examples above), the true power of DevOps is placed in the hands of every person on the team. Everyone can run make test, and everyone can modify its rules and dependencies. Everyone can run make infrastructure and provision a nice cluster for development or for production. After all, it is all documented in the same code!

Of course, when there’s common ground, it’s good to make sure who’s responsible for which part. The last thing you want is people from Dev and Ops overwriting each other’s work! But great teamwork always relies on great communication, so this could happen with or without make.

So it doesn’t matter whether you use any of the trendy technologies associated with DevOps. You may not need or want Docker, the cloud, Terraform, or Travis. You can write desktop applications, for what it’s worth, and a carefully written Makefile would still be a DevOps enabler.



High score, low pay: why the gig economy loves gamification



Illustration: Alamy/Guardian Design Team

Using ratings, competitions and bonuses to incentivise workers isn’t new – but as I found when I became a Lyft driver, the gig economy is taking it to another level.



In May 2016, after months of failing to find a traditional job, I began driving for the ride-hailing company Lyft. I was enticed by an online advertisement that promised new drivers in the Los Angeles area a $500 “sign-up bonus” after completing their first 75 rides. The calculation was simple: I had a car and I needed the money. So, I clicked the link, filled out the application, and, when prompted, drove to the nearest Pep Boys for a vehicle inspection. I received my flamingo-pink Lyft emblems almost immediately and, within a few days, I was on the road.

Initially, I told myself that this sort of gig work was preferable to the nine-to-five grind. It would be temporary, I thought. Plus, I needed to enrol in a statistics class and finish my graduate school applications – tasks that felt impossible while working in a full-time desk job with an hour-long commute. But within months of taking on this readily available, yet strangely precarious form of work, I was weirdly drawn in.

Lyft, which launched in 2012 as Zimride before changing its name a year later, is a car service similar to Uber, which operates in about 300 US cities and expanded to Canada (though so far just in one province, Ontario) last year. Every week, it sends its drivers a personalised “Weekly Feedback Summary”. This includes passenger comments from the previous week’s rides and a freshly calculated driver rating. It also contains a bar graph showing how a driver’s current rating “stacks up” against previous weeks, and tells them whether they have been “flagged” for cleanliness, friendliness, navigation or safety.

At first, I looked forward to my summaries; for the most part, they were a welcome boost to my self-esteem. My rating consistently fluctuated between 4.89 stars and 4.96 stars, and the comments said things like: “Good driver, positive attitude” and “Thanks for getting me to the airport on time!!” There was the occasional critique, such as “She weird”, or just “Attitude”, but overall, the comments served as a kind of positive reinforcement mechanism. I felt good knowing that I was helping people and that people liked me.

But one week, after completing what felt like a million rides, I opened my feedback summary to discover that my rating had plummeted from a 4.91 (“Awesome”) to a 4.79 (“OK”), without comment. Stunned, I combed through my ride history trying to recall any unusual interactions or disgruntled passengers. Nothing. What happened? What did I do? I felt sick to my stomach.

Because driver ratings are calculated using your last 100 passenger reviews, one logical solution is to crowd out the old, bad ratings with new, presumably better ratings as fast as humanly possible. And that is exactly what I did.

For the next several weeks, I deliberately avoided opening my feedback summaries. I stocked my vehicle with water bottles, breakfast bars and miscellaneous mini candies to inspire riders to smash that fifth star. I developed a borderline-obsessive vacuuming habit and upped my car-wash game from twice a week to every other day. I experimented with different air-fresheners and radio stations. I drove and I drove and I drove.


The language of choice, freedom, and autonomy saturates discussions of ride-hailing. “On-demand companies are pointing the way to a more promising future, where people have more freedom to choose when and where they work,” Travis Kalanick, the founder and former CEO of Uber, wrote in October 2015. “Put simply,” he continued, “the future of work is about independence and flexibility.”

In a certain sense, Kalanick is right. Unlike employees in a spatially fixed worksite (the factory, the office, the distribution centre), rideshare drivers are technically free to choose when they work, where they work and for how long. They are liberated from the constraining rhythms of conventional employment or shift work. But that apparent freedom poses a unique challenge to the platforms’ need to provide reliable, “on demand” service to their riders – and so a driver’s freedom has to be aggressively, if subtly, managed. One of the main ways these companies have sought to do this is through the use of gamification.



A driver working for Lyft and Uber in Los Angeles. Photograph: Richard Vogel/AP

Simply defined, gamification is the use of game elements – point-scoring, levels, competition with others, measurable evidence of accomplishment, ratings and rules of play – in non-game contexts. Games deliver an instantaneous, visceral experience of success and reward, and they are increasingly used in the workplace to promote emotional engagement with the work process, to increase workers’ psychological investment in completing otherwise uninspiring tasks, and to influence, or “nudge”, workers’ behaviour. This is what my weekly feedback summary, my starred ratings and other gamified features of the Lyft app did.

There is a growing body of evidence to suggest that gamifying business operations has real, quantifiable effects. Target, the US-based retail giant, reports that gamifying its in-store checkout process has resulted in lower customer wait times and shorter lines. During checkout, a cashier’s screen flashes green if items are scanned at an “optimum rate”. If the cashier goes too slowly, the screen flashes red. Scores are logged and cashiers are expected to maintain an 88% green rating. In online communities for Target employees, cashiers compare scores, share techniques, and bemoan the game’s most challenging obstacles.

But colour-coding checkout screens is a pretty rudimentary kind of gamification. In the world of ride-hailing work, where almost the entirety of one’s activity is prompted and guided by the screen – and where everything can be measured, logged and analysed – there are few limitations on what can be gamified.


In 1974, Michael Burawoy, a doctoral student in sociology at the University of Chicago and a self-described Marxist, began working as a miscellaneous machine operator in the engine division of Allied Corporation, a large manufacturer of agricultural equipment. He was attempting to answer the following question: why do workers work as hard as they do?

In Marx’s time, the answer to this question was simple: coercion. Workers had no protections and could be fired at will for failing to fulfil their quotas. One’s ability to obtain a subsistence wage was directly tied to the amount of effort one applied to the work process. However, in the early 20th century, with the emergence of labour protections, the elimination of the piece-rate pay system, the rise of strong industrial unions and a more robust social safety net, the coercive power of employers waned.

Yet workers continued to work hard, Burawoy observed. They co-operated with speed-ups and exceeded production targets. They took on extra tasks and sought out productive ways to use their downtime. They worked overtime and off the clock. They kissed ass. After 10 months at Allied, Burawoy concluded that workers were willingly and even enthusiastically consenting to their own exploitation. What could explain this? One answer, Burawoy suggested, was “the game”.

For Burawoy, the game described the way in which workers manipulated the production process in order to reap various material and immaterial rewards. When workers were successful at this manipulation, they were said to be “making out”. Like the levels of a video game, operators needed to overcome a series of consecutive challenges in order to make out and beat the game.

At the beginning of every shift, operators encountered their first challenge: securing the most lucrative assignment from the “scheduling man”, the person responsible for doling out workers’ daily tasks. Their next challenge was a trip to “the crib” to find the blueprint and tooling needed to perform the operation. If the crib attendant was slow to dispense the necessary blueprints, tools and fixtures, operators could lose a considerable amount of time that would otherwise go towards making or beating their quota. (Burawoy won the cooperation of the crib attendant by gifting him a Christmas ham.) After facing off against the truckers, who were responsible for bringing stock to the machine, and the inspectors, who were responsible for enforcing the specifications of the blueprint, the operator was finally left alone with his machine to battle it out against the clock.



A Lyft promotion using a Back to the Future-style DeLorean car in New York in 2015. Photograph: Lucas Jackson/Reuters

According to Burawoy, production at Allied was deliberately organised by management to encourage workers to play the game. When work took the form of a game, Burawoy observed, something interesting happened: workers’ primary source of conflict was no longer with the boss. Instead, tensions were dispersed between workers (the scheduling man, the truckers, the inspectors), between operators and their machines, and between operators and their own physical limitations (their stamina, precision of movement, focus).

The battle to beat the quota also transformed a monotonous, soul-crushing job into an exciting outlet for workers to exercise their creativity, speed and skill. Workers attached notions of status and prestige to their output, and the game presented them with a series of choices throughout the day, affording them a sense of relative autonomy and control. It tapped into a worker’s desire for self-determination and self-expression. Then, it directed that desire towards the production of profit for their employer.


Every Sunday morning, I receive an algorithmically generated “challenge” from Lyft that goes something like this: “Complete 34 rides between the hours of 5am on Monday and 5am on Sunday to receive a $63 bonus.” I scroll down, concerned about the declining value of my bonuses, which once hovered around $100-$220 per week, but have now dropped to less than half that.

“Click here to accept this challenge.” I tap the screen to accept. Now, whenever I log into driver mode, a stat meter will appear showing my progress: only 21 more rides before I hit my first bonus. Lyft does not disclose how its weekly ride challenges are generated, but the value seems to vary according to anticipated demand and driver behaviour. The higher the anticipated demand, the higher the value of my bonus. The more I hit my bonus targets or ride quotas, the higher subsequent targets will be. Sometimes, if it has been a while since I have logged on, I will be offered an uncharacteristically lucrative bonus, north of $100, though it has been happening less and less of late.

Behavioural scientists and video game designers are well aware that tasks are likely to be completed faster and with greater enthusiasm if one can visualise them as part of a progression towards a larger, pre-established goal. The Lyft stat meter is always present, always showing you what your acceptance rating is, how many rides you have completed, how far you have to go to reach your goal.

In addition to enticing drivers to show up when and where demand hits, one of the main goals of this gamification is worker retention. According to Uber, 50% of drivers stop using the application within their first two months, and a recent report from the Institute of Transportation Studies at the University of California in Davis suggests that just 4% of ride-hail drivers make it past their first year.

Retention is a problem in large part because the economics of driving are so bad. Researchers have struggled to establish exactly how much money drivers make, but with the release of two recent reports, one from the Economic Policy Institute and one from MIT, a consensus on driver pay seems to be emerging: drivers make, on average, between $9.21 (£7.17) and $10.87 (£8.46) per hour. What these findings confirm is what many of us in the game already know: in most major US cities, drivers are pulling in wages that fall below local minimum-wage requirements. According to an internal slide deck obtained by the New York Times, Uber actually identifies McDonald’s as its biggest competition in attracting new drivers. When I began driving for Lyft, I made the same calculation most drivers make: it is better to make $9 per hour than to make nothing.

Before Lyft rolled out weekly ride challenges, there was the “Power Driver Bonus”, a weekly challenge that required drivers to complete a set number of regular rides. I sometimes worked more than 50 hours per week trying to secure my PDB, which often meant driving in unsafe conditions, at irregular hours and accepting nearly every ride request, including those that felt potentially dangerous (I am thinking specifically of an extremely drunk and visibly agitated late-night passenger).

Of course, this was largely motivated by a real need for a boost in my weekly earnings. But, in addition to a hope that I would somehow transcend Lyft’s crappy economics, the intensity with which I pursued my PDBs was also the result of what Burawoy observed four decades ago: a bizarre desire to beat the game.


Drivers’ per-mile earnings are supplemented by a number of rewards, both material and immaterial. Uber drivers can earn “Achievement Badges” for completing a certain number of five-star rides and “Excellent Service Badges” for leaving customers satisfied. Lyft’s “Accelerate Rewards” programme encourages drivers to level up by completing a certain number of rides per month in order to unlock special rewards like fuel discounts from Shell (gold level) and free roadside assistance (platinum level).

In addition to offering meaningless badges and meagre savings at the pump, ride-hailing companies have also adopted some of the same design elements used by gambling firms to promote addictive behaviour among slot-machine users. One of the things the anthropologist and NYU media studies professor Natasha Dow Schüll found during a decade-long study of machine gamblers in Las Vegas is that casinos use networked slot machines that allow them to surveil, track and analyse the behaviour of individual gamblers in real time – just as ride-hailing apps do. This means that casinos can “triangulate any given gambler’s player data with her demographic data, piecing together a profile that can be used to customise game offerings and marketing appeals specifically for her”. Like these customised game offerings, Lyft tells me that my weekly ride challenge has been “personalised just for you!”

Former Google “design ethicist” Tristan Harris has also described how the “pull-to-refresh” mechanism used in most social media feeds mimics the clever architecture of a slot machine: users never know when they are going to experience gratification – a dozen new likes or retweets – but they know that gratification will eventually come. This unpredictability is addictive: behavioural psychologists have long understood that gambling uses variable reinforcement schedules – unpredictable intervals of uncertainty, anticipation and feedback – to condition players into playing just one more round.



A customer leaving a rating and review of an Uber driver. Photograph: Felix Clay for the Guardian

We are only beginning to uncover the extent to which these reinforcement schedules are built into ride-hailing apps. But one example is primetime or surge pricing. The phrase “chasing the pink” is used in online forums by Lyft drivers to refer to the tendency to drive towards “primetime” areas, denoted by pink-tinted heat maps in the app, which signify increased fares at precise locations. This is irrational because the likelihood of catching a good primetime fare is slim, and primetime is extremely unpredictable. The pink appears and disappears, moving from one location to the next, sometimes in a matter of minutes. Lyft and Uber have to dole out just enough of these higher-paid periods to keep people driving to the areas where they predict drivers will be needed. And occasionally – cherry, cherry, cherry – it works: after the Rose Bowl parade last year, I made in 40 minutes more than half of what I usually make in a whole day of non-stop shuttling.

It is not uncommon to hear ride-hailing drivers compare even the mundane act of operating their vehicles to the immersive and addictive experience of playing a video game or a slot machine. In an article published by the Financial Times, long-time driver Herb Croakley put it perfectly: “It gets to a point where the app sort of takes over your motor functions in a way. It becomes almost like a hypnotic experience. You can talk to drivers and you’ll hear them say things like, I just drove a bunch of Uber pools for two hours, I probably picked up 30–40 people and I have no idea where I went. In that state, they are literally just listening to the sounds [of the driver’s apps]. Stopping when they said stop, pick up when they say pick up, turn when they say turn. You get into a rhythm of that, and you begin to feel almost like an android.”


So, who sets the rules for all these games? It is 12.30am on a Friday night and the “Lyft drivers lounge”, a closed Facebook group for active drivers, is divided. The debate began, as many do, with an assertion about the algorithm. “The algorithm” refers to the opaque and often unpredictable system of automated, “data-driven” management employed by ride-hailing companies to dispatch drivers, match riders into Pools (Uber) or Lines (Lyft), and generate “surge” or “primetime” fares, also known as “dynamic pricing”.

The algorithm is at the heart of the ride-hailing game, and of the coercion that the game conceals. In their foundational text Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers, Alex Rosenblat and Luke Stark write: “Uber’s self-proclaimed role as a connective intermediary belies the important employment structures and hierarchies that emerge through its software and interface design.” “Algorithmic management” is the term Rosenblat and Stark use to describe the mechanisms through which Uber and Lyft drivers are directed. To be clear, there is no singular algorithm. Rather, there are a number of algorithms operating and interacting with one another at any given moment. Taken together, they produce a seamless system of automatic decision-making that requires very little human intervention.

For many on-demand platforms, algorithmic management has completely replaced the decision-making roles previously occupied by shift supervisors, foremen and middle- to upper- level management. Uber actually refers to its algorithms as “decision engines”. These “decision engines” track, log and crunch millions of metrics every day, from ride frequency to the harshness with which individual drivers brake. It then uses these analytics to deliver gamified prompts perfectly matched to drivers’ data profiles.

Because the logic of the algorithm is largely unknown and constantly changing, drivers are left to speculate about what it is doing and why. Such speculation is a regular topic of conversation in online forums, where drivers post screengrabs of nonsensical ride requests and compare increasingly lacklustre, algorithmically generated bonus opportunities. It is not uncommon for drivers to accuse ride-hailing companies of programming their algorithms to favour the interests of the corporation. To resolve this alleged favouritism, drivers routinely hypothesise and experiment with ways to manipulate or “game” the system back.

When the bars let out after last orders at 2am, demand spikes. Drivers have a greater likelihood of scoring “surge” or “primetime” fares. There are no guarantees, but it is why we are all out there. To increase the prospect of surge pricing, drivers in online forums regularly propose deliberate, coordinated, mass “log-offs” with the expectation that a sudden drop in available drivers will “trick” the algorithm into generating higher surges. I have never seen one work, but the authors of a recently published paper say that mass log-offs are occasionally successful.

Viewed from another angle, though, mass log-offs can be understood as good, old-fashioned work stoppages. The temporary and purposeful cessation of work as a form of protest is the core of strike action, and remains the sharpest weapon workers have to fight exploitation. But the ability to log-off en masse has not assumed a particularly emancipatory function. Burawoy’s insights might tell us why.

Gaming the game, Burawoy observed, allowed workers to assert some limited control over the labour process, and to “make out” as a result. In turn, that win had the effect of reproducing the players’ commitment to playing, and their consent to the rules of the game. When players were unsuccessful, their dissatisfaction was directed at the game’s obstacles, not at the capitalist class, which sets the rules. The inbuilt antagonism between the player and the game replaces, in the mind of the worker, the deeper antagonism between boss and worker. Learning how to operate cleverly within the game’s parameters becomes the only imaginable option. And now there is another layer interposed between labour and capital: the algorithm.


After weeks of driving like a maniac in order to restore my higher-than-average driver rating, I managed to raise it back up to a 4.93. Although it felt great, it is almost shameful and astonishing to admit that your rating, so long as it stays above 4.6, has no actual bearing on anything other than your sense of self-worth. You do not receive a weekly bonus for being a highly rated driver. Your rate of pay does not increase for being a highly rated driver. In fact, I was losing money trying to flatter customers with candy and keep my car scrupulously clean. And yet, I wanted to be a highly rated driver.

And this is the thing that is so brilliant and awful about the gamification of Lyft and Uber: it preys on our desire to be of service, to be liked, to be good. On weeks that I am rated highly, I am more motivated to drive. On weeks that I am rated poorly, I am more motivated to drive. It works on me, even though I know better. To date, I have completed more than 2,200 rides.

A longer version of this article first appeared in Logic, a new magazine devoted to deepening the discourse around technology


This post was originally published on this site

President Donald Trump and Chinese President Xi Jinping shake hands during a joint statement to members of the media at the Great Hall of the People in Beijing, China earlier this month. (AP Photo/Andrew Harnik)

November 19 at 5:50 PM

Amazon, Apple, Google, IBM and their peers could be subject to new restrictions on how they export the technology behind voice-activated smartphones, self-driving cars and speedy supercomputers to China under a proposal floated Monday by the Trump administration.

For the U.S. government, its pursuit of new regulations marks a heightened effort to ensure that emerging technologies, including artificial intelligence, don’t fall into the hands of countries or actors that might pose a national security threat.

The official request for public comment, published in the Federal Register, asks whether a long list of AI tools should be subject to stricter export-control rules. The Trump administration’s potential targets include image-recognition software, ultrafast quantum computers, advanced computer chips, self-driving cars and robots. Companies that make those products and services, for instance, might have to obtain licenses before selling them to foreign governments or partnering with some researchers in certain countries.

The document itself is only an initial notice of rules to come, and the Commerce Department, which is leading the effort, is still deciding how to proceed. But its broad wording, along with the White House’s long-running, high-stakes trade rift with Beijing, left some tech experts fearful that it could result in greater market barriers for companies doing business in China. They also worry that any rule could adversely affect U.S. investment and research in AI.

“If you think about the range of products this potentially implicates, that’s massive. This is either the opening of a big negotiation with the industry and the public, or a bit of a cry for help in scoping these regulations,” said David Edelman, the director of the Project on Technology, the Economy, & National Security at the Massachusetts Institute of Technology.

“We’ve seen the China trade war is a political football,” said Edelman, a former top tech aide to President Obama, adding: “Surprises are not off the table here.”

The Commerce Department explicitly said it hopes to protect national security without “hampering the ability of the U.S. commercial sector to keep pace with international advances.” The rules come at the request of Congress, which authorized them as part of a recently passed defense bill. On Thursday, the agency said it is “committed to laying the proper foundation to ensure that those technologies critical to national security remain protected to ensure the safety of our country.”

Spokespeople for Apple, Google and IBM did not respond to requests for comment, and Amazon declined to comment.

In its proposal, the Trump administration said it is focused “at a minimum” on countries already subject to a U.S. arms embargo, a category that includes China. The potential limits on AI research and exports reflect a growing apprehension about Beijing’s investment in that industry. In December 2017, for example, the White House’s official National Security Strategy warned that “risks to U.S. national security will grow as competitors integrate information derived from personal and commercial sources with intelligence collection and data analytic capabilities based on artificial intelligence and machine learning.”

What the Trump administration is “trying to control is the flow of know-how or research or development that enables” high-tech capabilities, said Melissa Duffy, an export-control expert and partner at the law firm Dechert LLP.

Some in the tech industry worry the regulations could widen an existing trade rift between Washington and Beijing. Many executives at big tech companies have bristled at White House-backed tariffs on China, fearing that Beijing’s retaliation could result in higher costs for them — or higher prices on components for their products. Others simply don’t want to anger Beijing, which controls one of the world’s largest markets for tech devices.

To that end, Silicon Valley’s lobbyists have been scrambling to digest what the new proposal would mean for their industry at a time when U.S. firms are racing their Chinese counterparts to dominate the market for AI.

“We view this as a really important process that our industry is taking very seriously and plan to engage in, because the outcomes are of great consequence for us,” said Christian Troncoso, a policy director at BSA | The Software Alliance, a Washington-based trade group for companies including Apple, Microsoft, IBM and Oracle.

This post was originally published on this site

By Tobias Adrian, Fabio Natalucci, and Thomas Piontek

November 15, 2018

A drilling crew member raises a pipe on an oil rig in Texas: Energy is among the industries in which leveraged lending is most prevalent, along with telecommunications, health care, and technology (photo: Nick Oxford/Reuters/Newscom)

We warned in the most recent Global Financial Stability Report that speculative excesses in some financial markets may be approaching a threatening level. For evidence, look no further than the $1.3 trillion global market for so-called leveraged loans, which has some analysts and academics sounding the alarm on a dangerous deterioration in lending standards. They have a point.

This growing segment of the financial world involves loans, usually arranged by a syndicate of banks, to companies that are heavily indebted or have weak credit ratings. These loans are called “leveraged” because the ratio of the borrower’s debt to assets or earnings significantly exceeds industry norms.

With interest rates extremely low for years and with ample money flowing through the financial system, yield-hungry investors are tolerating ever-higher levels of risk and betting on financial instruments that, in less speculative times, they might sensibly shun.

For their part, speculative-grade companies have been eager to load up on cheap debt. Globally, new issuance of leveraged loans hit a record $788 billion in 2017, surpassing the pre-crisis high of $762 billion in 2007. The United States was by far the largest market last year, accounting for $564 billion of new loans.


So far this year, issuance has reached an annual rate of $745 billion. More than half of this year’s total involves money borrowed to fund mergers and acquisitions and leveraged buyouts (LBOs), pay dividends, and buy back shares from investors—in other words, for financial risk-taking rather than plain-vanilla productive investment. Most borrowers are technology, energy, telecommunications, and health care firms.

At this late stage of the credit cycle, with signs reminiscent of past episodes of excess, it’s vital to ask: How vulnerable is the leveraged-loan market to a sudden shift in investor risk appetite? If this market froze, what would be the economic impact? In a worst-case scenario, could a breakdown threaten financial stability?

It is not only the sheer volume of debt that is causing concern. Underwriting standards and credit quality have deteriorated. In the United States, the most highly indebted speculative-grade firms now account for a larger share of new issuance than before the crisis. New deals also include fewer investor protections, known as covenants, and lower loss-absorption capacity. This year, so-called covenant-lite loans account for up to 80 percent of new loans arranged for nonbank lenders (so-called “institutional investors”), up from about 30 percent in 2007. Not only the number, but also the quality of covenants has deteriorated.

Furthermore, strong investor demand has resulted in a loosening of nonprice terms, which are more difficult to monitor. For example, weaker covenants have reportedly allowed borrowers to inflate projections of earnings. They have also allowed them to borrow more after the closing of the deal. With rising leverage, weakening investor protections, and eroding debt cushions, average recovery rates for defaulted loans have fallen to 69 percent from the pre-crisis average of 82 percent. A sharp rise in defaults could have a large negative impact on the real economy given the importance of leveraged loans as a source of corporate funding.

A significant shift in the investor base is another reason for worry. Institutions now hold about $1.1 trillion of leveraged loans in the United States, almost double the pre-crisis level. That compares with $1.2 trillion in high-yield, or junk, bonds outstanding. Such institutions include loan mutual funds, insurance companies, pension funds, and collateralized loan obligations (CLOs), which package loans and then resell them to still other investors. CLOs buy more than half of overall leveraged loan issuance in the United States. Mutual funds that invest in leveraged loans have grown from roughly $20 billion in assets in 2006 to about $200 billion this year, accounting for over 20 percent of loans outstanding. Institutional ownership makes it harder for banking regulators to address potential risk to the financial system if things go wrong.

Regulators in the United States and Europe have taken steps in recent years to reduce banks’ exposures and to curb market excesses more broadly. The effectiveness of these steps, however, remains an open question. For example, there is evidence that these actions have contributed to a shift of activities from banks to institutional investors. These investors have different risk profiles and may pose different risks to the financial system than banks.

While banks have become safer since the financial crisis, it is unclear whether institutional investors retain a link to the banking sector, which could inflict losses at banks during market disruptions. Furthermore, few tools are available to address credit and liquidity risks in global capital markets. So it is crucial for policymakers to develop and deploy new tools to address deteriorating underwriting standards. Having learned a painful lesson a decade ago about unforeseen threats to the financial system, policymakers should not overlook another potential threat.

Related links:
The Financial System Is Stronger, but New Vulnerabilities Have Emerged in the Decade Since the Crisis
A Decade After Lehman, the Financial System Is Safer. Now We Must Avoid Reform Fatigue
Chart of the Week: When High Yield Goes Boom
This post was originally published on this site

If you are trying to test logic that relies on the current time you have probably run into at least one of these issues:

  1. Time is always moving. Predicting a result, such as a hash, may be difficult or impossible.

  2. Tests become brittle. If your production code or test needs to refer to the current time, it may read it just before or just after the second ticks over. This means that every so often the test will fail. I have also seen tests that were passing for months suddenly start failing because of a daylight saving time change.
  3. Some tests need to verify that something specific has or has not changed when some period of time passes. The easiest choice is to add a sleep() to your test, but that really slows down your tests.

There are some less-than-desirable ways around this. However, it really comes back to the principle that your output should be deterministic from your input. Dependency injection is a great way to control this and create testable code. The system clock is actually just a dependency like any other.

That is to say that the logic shouldn’t need to know where it’s getting the current time from so long as it can get it.

Jumping right into an example:

class TimeFormatter
{
    public function whatTimeIsIt()
    {
        return (new DateTime())->format(DateTimeInterface::ISO8601);
    }
}

Obviously the test would fail:

use PHPUnit\Framework\TestCase;

class TimeFormatterTest extends TestCase
{
    public function testWhatTimeIsIt()
    {
        $timeFormatter = new TimeFormatter();

        // Fails: the hard-coded timestamp will never match the real current time.
        $actual = $timeFormatter->whatTimeIsIt();
        $this->assertEquals("2018-08-15T15:52:01+0000", $actual);

        // A real sleep() also slows the whole suite down by a full minute.
        sleep(60);

        $actual = $timeFormatter->whatTimeIsIt();
        $this->assertEquals("2018-08-15T15:53:01+0000", $actual);
    }
}

Let’s consider the clock as an interface:

interface ClockInterface
{
    /**
     * Get the current time.
     */
    public function now(): DateTime;

    /**
     * Sleep for a fixed number of whole seconds.
     *
     * This can be used as a substitute for sleep().
     */
    public function sleep(int $seconds): void;
}

Now updating our class to abstract away the clock as a dependency:

class TimeFormatter
{
    private $clock;

    public function __construct(ClockInterface $clock)
    {
        $this->clock = $clock;
    }

    public function whatTimeIsIt()
    {
        return $this->clock->now()->format(DateTimeInterface::ISO8601);
    }
}

This may look a bit alien at first. However, remember that the clock is still actually a dependency, even if you are not used to handling it in this way.

Now the test looks like this:

use PHPUnit\Framework\TestCase;

class TimeFormatterTest extends TestCase
{
    public function testWhatTimeIsIt()
    {
        $clock = new FakeClock();
        $timeFormatter = new TimeFormatter($clock);

        // FakeClock (shown below) always starts at 04 Apr 1984 00:00:00 UTC.
        $actual = $timeFormatter->whatTimeIsIt();
        $this->assertEquals("1984-04-04T00:00:00+0000", $actual);

        // Instantaneous: advances the fake clock by 60 seconds.
        $clock->sleep(60);

        $actual = $timeFormatter->whatTimeIsIt();
        $this->assertEquals("1984-04-04T00:01:00+0000", $actual);
    }
}

Where did FakeClock come from? Well here it is:

/**
 * FakeClock is used for testing.
 *
 * An instance will always have a starting value of
 * "Wed, 04 Apr 1984 00:00:00 +0000".
 */
class FakeClock implements ClockInterface
{
    private $time;

    public function __construct()
    {
        // 449884800 = "Wed, 04 Apr 1984 00:00:00 +0000"
        $this->time = DateTime::createFromFormat('U', '449884800');
    }

    public function now(): DateTime
    {
        // Return a copy so callers cannot mutate the clock's internal state.
        return clone $this->time;
    }

    public function sleep(int $seconds): void
    {
        // Advance the fake time instantly instead of really sleeping.
        $this->time->add(new DateInterval("PT{$seconds}S"));
    }
}

Great. Not only have we made the clock predictable, but sleep is now instantaneous.

In production code we can use the RealClock:

class RealClock implements ClockInterface
{
    public function now(): DateTime
    {
        return new DateTime();
    }

    public function sleep(int $seconds): void
    {
        sleep($seconds);
    }
}
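
To tie it all together, here is a minimal usage sketch (the inline wiring is illustrative; in a real application the clock would typically be wired up in a service container or factory). Production code receives a RealClock, tests receive a FakeClock, and TimeFormatter cannot tell the difference:

// In production: read the real system clock.
$formatter = new TimeFormatter(new RealClock());
echo $formatter->whatTimeIsIt(); // e.g. "2018-11-20T10:15:30+0000"

// In a test: read the fake, fully predictable clock.
$formatter = new TimeFormatter(new FakeClock());
echo $formatter->whatTimeIsIt(); // always "1984-04-04T00:00:00+0000"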


The interface and implementation were inspired by jonboulle/clockwork, which I have used with Go.
This post was originally published on this site

The VLSI Symposium is a conference that brings together two typically separate focus areas: semiconductor manufacturing and circuit design. VLSI rotates between Hawaii and Kyoto – the audience is smaller than either IEDM or ISSCC, but the location draws more attendees from Asia. I saw three major themes running through VLSI this year: machine learning as exemplified by an IBM accelerator, novel memories (e.g., magnetoresistive memory, or MRAM), and new but incremental logic process technologies. Rather than write a larger article covering all of them, I am writing a series of shorter and more informal articles delving into a few different papers and topics.

The second major area of interest at the VLSI Symposium was novel memory technologies. Historically, most companies have sat on the sidelines with only modest efforts in emerging memories. The payoff for a new memory is potentially huge, but the required investment is correspondingly daunting. The three major memory types – SRAM, DRAM, and NAND – are so mature and well-optimized that they set a high hurdle for any newcomer. However, the debut of Intel and Micron’s 3DXP has woken the industry up and is acting as a catalyst for investment in other memory types.

MRAM is an emerging memory that is already in production. A full description of the technology is beyond the scope of this article. Briefly, the memory works by manipulating magnetic materials to modulate the resistance of a magnetic tunnel junction (or MTJ) cell. The MTJs are non-volatile, use CMOS-compatible voltages, and are formed in the metal layers without disturbing the critical transistor formation process flows. MRAM is denser than SRAM, but the memory cells are about 50% larger than comparable eDRAM. Moreover, MRAM appears to scale very nicely to smaller feature sizes whereas SRAM scaling has slowed in recent years; some companies hope that one day MRAM can replace SRAM for large arrays. MRAM is also much faster than flash, with latencies close to DRAM.

Everspin Technologies offers several generations of MRAM-based discrete memory products using standard DDR3 and DDR4 interfaces. Other companies are interested in MRAM as a memory that is embedded in a modern logic process technology. However, MRAM comes with its own share of challenges. Two interesting papers at the VLSI Symposium described some of these problems and proposed potential solutions.

TDK Focuses on New Materials

A first paper from TDK describes a new STT-MRAM cell design that the company claims is attractive for sub-10nm logic process technologies. The goal was to achieve a bit-cell that is compact, operates at low voltage, and is sufficiently reliable with low write errors.

Generally, the switching voltage of an STT-MRAM bit-cell is determined by the product of the resistance and the area of the MTJ (described as the RA). Using new materials, the team created a small MTJ with a 30nm diameter and lower (but undisclosed) RA that can switch at less than 300mV. This should enable integration on a chip that operates at roughly 0.5V. The smaller MTJ also requires less energy (0.12pJ) to switch, and can be written using either a 10ns pulse of 50μA or a 20ns pulse of 35μA. For the shorter write pulse at 270mV, the team demonstrated that the write error rate was under 10⁻⁹.
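
As a rough sanity check (my arithmetic, not a figure from the paper), the reported write energy is consistent with the pulse specifications, since switching energy is approximately the product of voltage, current, and pulse width:

E ≈ V × I × t ≈ 0.27 V × 50 μA × 10 ns ≈ 0.135 pJ

which is in line with the quoted 0.12pJ.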

TSMC Designs for Easier Reading

A second paper from TSMC addressed another problem, using an entirely different set of techniques. One of the drawbacks of MTJs is the small read window; the difference between high- and low-resistance states in an MTJ is typically 2-3X. By way of comparison, Intel’s 22FFL process features fast transistors that leak 10nA/μm and achieve drive currents of 1.22mA/μm – a difference of more than 100,000X. As a result, sensing the value of an MTJ bit-cell is much more difficult than an SRAM bit-cell.

Unfortunately, building an MRAM array compounds this problem further. To build a compact and efficient memory array, many small MTJ cells are formed in fine-pitch interconnect layers and connected on a shared bitline. The more cells on the bitline, the greater the variation between the near and far cells. Similarly, a good array will share other wiring (sourceline and wordline) between multiple cells to better amortize the area overhead. To achieve good density, the array should ideally use fine-pitch metal layers for the wiring, but the smaller wires are more susceptible to variation. Each of these cumulative sources of variation eats away at the already small read window.

The TSMC team turned to a more complicated sensing circuit design to overcome the small read window for a 16Mbit array implemented in a 40nm CMOS process. The array is organized into long bitlines that span 1,024 cells and wordlines that span 8 cells. The reference cells are put in series with a resistor that is tuned to precisely shift the reference cells into the center of the read window. TSMC employs current-mode latch sense amplifiers that are enhanced with calibration and trimming for each sense amplifier to mitigate process variations. In addition, the sense amplifiers use the trim values to compensate for the difference in current from the top half and bottom half of the long bitlines, increasing the read window by 20%. The calibration and trimming are performed during manufacturing.

Overall, the 16Mbit array consumed 3.03mm² in TSMC’s 40nm logic process, and the company reported a 17.5ns access time using a 1.1V Vdd over a −40°C to 125°C temperature range.

Analysis

Both the TDK and TSMC papers highlight challenges to MRAM adoption and offer potential solutions. The two teams take strikingly different approaches. TDK favors material science and manufacturing techniques to improve the underlying MTJs with a focus on scaling down the bitcell and improving the write behavior. The data is not for full arrays and some characteristics such as endurance cannot be extrapolated from a small sample of cells to an array or to high-volume manufacturing. The TSMC work aims to improve the read behavior of modest sized arrays with clever circuit design that mitigates the imperfections in the underlying media.

Neither paper describes a technology that is ready for high-volume manufacturing. But both teams highlight the low-hanging research opportunities that are available for an emerging memory such as MRAM. The two papers also illustrate the great commercial interest in developing MRAM. As Moore’s Law slows, semiconductor manufacturers must turn to new techniques to boost performance, and new memories, such as MRAM, could fill that void. Each emerging memory will require considerable research to pick the low-hanging fruit and find a commercially viable niche that will support high-volume manufacturing.


This post was originally published on this site

An Open Letter to Hobbyists

To me, the most critical thing in the hobby market right now is the lack of good software courses, books and software itself. Without good software and an owner who understands programming, a hobby computer is wasted. Will quality software be written for the hobby market?

Almost a year ago, Paul Allen and myself, expecting the hobby market to expand, hired Monte Davidoff and developed Altair BASIC. Though the initial work took only two months, the three of us have spent most of the last year documenting, improving and adding features to BASIC. Now we have 4K, 8K, EXTENDED, ROM and DISK BASIC. The value of the computer time we have used exceeds $40,000.

The feedback we have gotten from the hundreds of people who say they are using BASIC has all been positive. Two surprising things are apparent, however, 1) Most of these “users” never bought BASIC (less than 10% of all Altair owners have bought BASIC), and 2) The amount of royalties we have received from sales to hobbyists makes the time spent on Altair BASIC worth less than $2 an hour.

Why is this? As the majority of hobbyists must be aware, most of you steal your software. Hardware must be paid for, but software is something to share. Who cares if the people who worked on it get paid?

Is this fair? One thing you don’t do by stealing software is get back at MITS for some problem you may have had. MITS doesn’t make money selling software. The royalty paid to us, the manual, the tape and the overhead make it a break-even operation. One thing you do do is prevent good software from being written. Who can afford to do professional work for nothing? What hobbyist can put 3-man years into programming, finding all bugs, documenting his product and distribute for free? The fact is, no one besides us has invested a lot of money in hobby software. We have written 6800 BASIC, and are writing 8080 APL and 6800 APL, but there is very little incentive to make this software available to hobbyists. Most directly, the thing you do is theft.

What about the guys who re-sell Altair BASIC, aren’t they making money on hobby software? Yes, but those who have been reported to us may lose in the end. They are the ones who give hobbyists a bad name, and should be kicked out of any club meeting they show up at.

I would appreciate letters from any one who wants to pay up, or has a suggestion or comment. Just write to me at 1180 Alvarado SE, #114, Albuquerque, New Mexico, 87108. Nothing would please me more than being able to hire ten programmers and deluge the hobby market with good software.

Bill Gates

General Partner, Micro-Soft

This post was originally published on this site

Amazon may have been expecting lots of public attention when it announced where it would establish its new headquarters – but like many technology companies recently, it probably didn’t anticipate how negative the response would be. In Amazon’s chosen territories of New York and Virginia, local politicians balked at taxpayer-funded enticements promised to the company. Journalists across the political spectrum panned the deals – and social media filled up with the voices of New Yorkers and Virginians pledging resistance.

Similarly, revelations that Facebook exploited anti-Semitic conspiracy theories to undermine its critics’ legitimacy indicate that instead of changing, Facebook would rather go on the offensive. Even as Amazon and Apple saw their stock-market values briefly top US$1 trillion, technology executives were dragged before Congress, struggled to coherently take a stance on hate speech, got caught covering up sexual misconduct and saw their own employees protesting business deals.

In some circles this is being seen as a loss of public trust in the technology firms that promised to remake the world – socially, environmentally and politically – or at least as frustration with the way these companies have changed the world. But the technology companies need to do much more than regain the public’s trust; they need to prove that they deserved it in the first place – which, when placed in the context of the history of technology criticism and skepticism, they didn’t.

Looking away from the problems

Big technology companies used to frame their projects in vaguely utopian, positive-sounding lingo that obscured politics and public policy, transcended partisanship and – conveniently – avoided scrutiny. Google used to remind its workers “Don’t be evil.” Facebook worked to “make the world more open and connected.” Who could object to those ideals?

Scholars warned about the dangers of platforms like these, long before many of their founders were even born. In 1970, social critic and historian of technology Lewis Mumford predicted that the goal of what he termed “computerdom” would be “to furnish and process an endless quantity of data, in order to expand the role and ensure the domination of the power system.” That same year a seminal essay by feminist thinker Jo Freeman warned about the inherent power imbalances that remained in systems that appeared to make everyone equal.

Similarly, in 1976, the computer scientist Joseph Weizenbaum predicted that in the decades ahead people would find themselves in a state of distress as they became increasingly reliant on opaque technical systems. Countless similar warnings have been issued ever since, including important recent scholarship such as information scholar Safiya Noble’s exploration of how Google searches replicate racial and gender biases and media scholar Siva Vaidhyanathan’s declaration that “the problem with Facebook is Facebook.”

The technology companies are powerful and wealthy, but their days of avoiding scrutiny may be ending. The American public seems to be starting to suspect that the technology giants were unprepared, and perhaps unwilling, to assume responsibility for the tools they unleashed upon the world.

In the aftermath of the 2016 U.S. presidential election, concern remains high that Russian and other foreign governments are using any available social media platform to sow discord and discontent in societies around the globe.

Facebook has still not solved the problems in data privacy and transparency that caused the Cambridge Analytica scandal. Twitter is the preferred megaphone for President Donald Trump and home to disturbing quantities of violent hate speech. The future of Amazon’s corporate offices is shaping up to be a multi-sided brawl among elected officials and the people they supposedly represent.

Is it ignorance or naivete?

Viewing the present situation with the history of critiques of technology in mind, it’s hard not to conclude that the technology companies deserve the crises they are facing. These companies ask people to entrust them with their emails, personal data, online search histories and financial information, to the point that many of these companies proudly tout that they know individuals better than they know themselves. They promote their latest systems, including “smart speakers” and “smart cameras,” seeking to ensure that users’ every waking moment – and sleeping moments too – can be monitored, feeding more data into their money-making algorithms.

Yet seemingly inevitably these companies go on to demonstrate how unworthy of trust they actually are, leaking data, sharing personal information and failing to prevent hacking, as they slowly fill the world with a disturbing techno-paranoia worthy of an episode of “Black Mirror.”

Technology firms’ responses to each new revelation fit a standard pattern: After a scandal emerges, the company involved expresses alarm that anything went wrong, promises to investigate, and pledges to do better in the future. Some time – days, weeks or even months – later, the company reveals that the scandal was a direct result of how the system was designed, and trots out a dismayed executive to express outrage at the destructive uses bad people found for their system, without admitting that the problem is the system itself.

Zuckerberg himself told the U.S. Senate in April 2018 that the Cambridge Analytica scandal had taught him “we have a responsibility to not just give people tools, but to make sure that those tools are used for good.” That’s a pretty fundamental lesson to have missed out on while creating a multi-billion-dollar company.

Rebuilding from what’s left

Using any technology – from a knife to a computer – carries risks, but as technological systems increase in size and complexity the scale of these risks tends to increase as well. A technology is only useful if people can use it safely, in ways where the benefits outweigh the dangers, and if they can feel confident that they understand, and accept, the potential risks. A couple of years ago, Facebook, Twitter and Google may have appeared to most people as benign communication methods that brought more to society than they took away. But with every new scandal, and bungled response, more and more people are seeing that these companies pose serious dangers to society.

As tempting as it may be to point to the “off” button, there’s not an easy solution. Technology giants have made themselves part of the fabric of daily life for hundreds of millions of people. Suggesting that people just quit is simple, but fails to recognize how reliant many people have become on these platforms – and how trapped they may feel in an increasingly intolerable situation.

As a result, people buy books about how bad Amazon is – by ordering them on Amazon. They conduct Google searches for articles about how much information Google knows about each individual user. They tweet about how much they hate Twitter and post on Facebook articles about Facebook’s latest scandal.

The technology companies may find themselves ruling over an increasingly aggravated user base, as their platforms spread the discontent farther and wider than possible in the past. Or they might choose to change themselves dramatically, breaking themselves up, turning some controls over to the democratic decisions of their users and taking responsibility for the harm their platforms and products have done to the world. So far, though, it seems the industry hasn’t gone beyond offering half-baked apologies while continuing to go about business as usual. Hopefully that will change. But if the past is any guide, it probably won’t.

Note: I was asked to write this piece by The Conversation; it was published there under a Creative Commons license. You can read the original article on their site. I would like to thank Jeff Inglis for his excellent editorial work.

Related Content

Techlash! What Techlash?

Challenging the Tech Companies from Within

Be Wary of Silicon Valley’s Guilty Conscience

Ethical OS and Silicon Valley’s Guilty Conscience

An Island of Reason in the Cyberstream – on Joseph Weizenbaum

