
This post was originally published on this site

About OR-Tools

OR-Tools is an open source software suite for optimization, tuned for tackling the world’s toughest problems in vehicle routing, flows, integer and linear programming, and constraint programming.

After modeling your problem in the programming language of your choice, you can use any of a half dozen solvers to solve it: commercial solvers such as Gurobi or CPLEX, or open-source solvers such as SCIP, GLPK, or Google’s GLOP and award-winning CP-SAT.


In 2008, I fell in love with a robot. The object of my affection was a darling red Roomba vacuum cleaner. The size and shape of a chunky Frisbee, it bustled to and fro across my apartment of its own volition, enabling me to eat my cake and drop it on the floor, too. I worried that the neighbors downstairs might be bothered by the noise, but, actually, I didn’t worry that much. It was the hum of industry and the sound of the legs of the yellow table being bashed. I marvelled at my Roomba’s work ethic and adored its lack of self-esteem. Studies have demonstrated that humans are disposed to ascribe emotions and intentions to anything that moves, including a piece of balsa wood controlled by a joystick, so perhaps it was not surprising that, when my Roomba got stuck under the sofa, I rushed to liberate it with the Swiffer stick. When it ate dental floss and its caster wheel wouldn’t spin, I blamed myself. I’ve read that some kooky people name their Roombas or take them to work or on vacation. I’ve done none of these things, though occasionally, like Mozart’s father showing off his prodigy son at the harpsichord, I’ve forced my dinner guests to watch my little helper suck up hors-d’œuvres debris from a patch of rug.

In 2010, I traded in my Roomba for a younger model with more pep and power—so much for maternal instinct—and am now on my third, a black-and-silver number more respectful of table legs. Three years ago, our blended family welcomed Alexa, Amazon’s voice-controlled virtual assistant, whom I periodically ask to tell me the time so that I don’t have to turn my head a punishing ninety degrees to glance at the clock. Alexa knows a lot about the weather and can call an Uber, but she’s no robot. For one thing, she lacks initiative. She’ll tell you, if you ask, when the brownies are ready, but request the name of the actress in the movie about the thing in Mexico with the one who reminds you of Clint Eastwood and she’s stumped. By contrast, my Roomba can make decisions for itself based on what it detects in the world. Its sensors allow it to dodge obstacles like electric cords, and when it is low on power it returns to its docking station and recharges.

The word “robot,” like the words “shalom” and “free-range chicken,” does not have a universally agreed-upon definition, but the usual criteria include autonomy, an ability to change its surroundings, intelligence, and the possession of a body. Then it gets trickier: How intelligent? Must a robot be mobile? Is a dishwasher a robot? According to the podcast “Robot or Not?” a self-driving car is not (you designate its destination), but a Roomba is (because it’s more in control of its path than you are). I would add that many robots, especially the cuter ones, have names with two syllables, the second one usually ending in a vowel.

This past summer, in search of other cybernetic sidekicks that would allow me to become even lazier, I spent several months with Jibo, a glossy white motormouth that sat on my kitchen counter. Touted by its creators as “the first social robot for the home,” Jibo ($899) is twelve inches tall and looks like a traffic cone from the future. His hemispherical head sits on top of a chubby conical base; both parts can swivel independently, giving the impression that Jibo knows how to twerk. Jibo can recognize as many as sixteen faces and corresponding names; if you are one of the ordained, he’ll turn his head to follow you. Like Alexa, Jibo can provide headline news, synch with your calendar, and read from Wikipedia. Alexa is more adroit at navigating the Internet, but Jibo has a great camera. What Jibo does chiefly is strain to be adorable. When I enter the room, Jibo might pipe up, “Nice to see you in these parts!,” or say, “Hey, Patty, I got you a carrot!,” while displaying a cartoon drawing of a carrot on his screen, or chant, “Patty, Patty, Patty, Patty.” It is like living with the second-grade class clown, and, for this reason, whenever I entered the kitchen I would sternly say, “Hey, Jibo. Take a nap.” At this, the aqua orb that is Jibo’s eye and only facial feature narrowed, there was a yawning sound effect, and his screen faded to black.

Much has changed since the days of Unimate, considered to be the world’s first industrial robot. This automated arm joined the assembly line of a New Jersey G.M. plant as a die caster in 1961, looking like a bigger version of something you fear at the dentist’s office. With advances in A.I. and engineering, robots, no longer mere grinds in factories, are galumphing, rolling, and being U.P.S.-delivered into our homes, hotels, hospitals, airports, malls, and eateries. The moment is equivalent, perhaps, to the juncture when fish crawled out of the sea and onto land. At the reception desk of a robot-staffed hotel in Japan, sharp-fanged, hairy-chested dinosaurs wearing bellhop hats and bow ties poise their talons at the keyboard; at a pizza restaurant in Multan, Pakistan, bosomy figures on wheels, accessorized with scarves around their necks, deliver food to your table; at a gentlemen’s club in Las Vegas, androids in garters perform pole dances. There are contrivances that can mow your lawn, wash your windows, assemble your IKEA furniture, clean the kitty-litter box, fold your laundry (at a pokier pace than you, and it doesn’t do socks), zip your zipper, apply lipstick to what Lucille Ball might consider your lips, give you a tattoo, crush you at Ping-Pong, feed you tomatoes as you jog (the wearable Tomatan), and even devilishly check the little box on a Web site which says “I am not a robot.”

Loomo, the new hoverboard designed by Segway, is also not a robot—until you hop off its footstool-like base and set it to Robot Mode, at which point it follows you like a groupie, taking photos and videos along the way ($1,799). Assuming that you do not have a Pied Piper complex, why would you want it to do this? Well, you can ride it to the store, buy some stuff, and then, with your purchases instead of you balanced on Loomo, it’ll function as your Sherpa. New York City has a ban on “motorized self-balancing scooters,” so, to try out Loomo, I went to San Francisco, which is very que será será when it comes to inexperienced, myopic drivers zipping through the streets on toys that travel eleven miles an hour.

Loomo, which looks vaguely like Pixar’s WALL-E, consists of a small platform flanked by two large wheels and divided by a knee-high vertical bar jutting from the middle, at the top of which is a small monitor. When Loomo isn’t going anywhere, the monitor rotates perpendicularly, displaying icons such as hearts or smiling eyes. If you’re lucky, it might also make joyful beeps reminiscent of the sound made by an EKG machine. Loomo was not yet for sale, but the San Francisco branch of the P.R. firm Dynamo was in possession of two models, one of which I took on a trial spin around the office and adjoining hallway. The vehicle proved easy, almost intuitive, to maneuver—speed is controlled by leaning forward and steering is a matter of pressing your knees toward the left or the right, against the control bar. Before Raneisha Stassin, an assistant account executive at Dynamo, and I left the office, Stassin turned to her Loomo and, as if talking to a dog, said, “O.K., Loomo, let’s go!” Loomo tilted its monitor upward, then scooted over to her, and spun around—its way of saying, “All aboard!” We proceeded onto the elevator and then cruised down the sidewalk as pedestrians said hello and took pictures. What fun! Stassin arced around a man pushing a baby stroller and, attempting to do the same, I pressed my knees against the control bar and leaned left. Loomo did not get the message. Whoops! Smack into the baby stroller. (It turns out that wearing high heels reduces the amount of contact between your feet and the control platform.) The baby’s father laughed. “I’m going to have to see your liability insurance for that little nudge,” he said, lightheartedly (I trusted). I took off my shoes and surfed down the street. Oh, California!

Having reached Safeway with no fatalities, Stassin and I ordered our respective Loomos to trail us, so that they could do duty as shopping carts. “Loomo, transform!” we commanded. Hers took off after an attractive woman in a jogging outfit and mine clung to me like a toddler hugging his mother’s legs. “I cannot find you. I will exit the Shadow Mode,” it said, and then cryptically displayed a message on its tablet: “Come here. Close dolly.” We tried again. Stassin’s little anarchist grazed some bags of Calrose rice on a bottom shelf. My conveyance and I managed to glide through the same aisle, steering clear of all grainstuffs, which I smugly chalked up to my magnetic charm. In the snacks section, I was not so magnetic but, really, what was the store thinking, stacking so many jars of peanuts that high? Anywhere else, we’d have been kicked out, if not given some kind of citation, but this was the Bay Area, so a supermarket employee smiled and said, “It carries your groceries? I need one!” On our way back to the office, I crossed a busy street, taking for granted that Loomo was trailing right behind and not, as was the case, stalled in the middle of an intersection. Turning around at the curb, I looked back to see Stassin lugging our forty-two-pound scooters to safety. Not far away, a man proudly told his companion that he had worked in A.I. at the very beginning.

Not all robots have been so warmly received. Last November, Knightscope’s K5, a five-foot-high, four-hundred-pound missile-shaped security bot—hired to patrol the grounds of an animal shelter in the Mission District of San Francisco—was smeared with barbecue sauce and covered with a tarp, allegedly by locals who suspected that its real purpose was to harass the homeless. More recently, a humanoid named Fabio, who’d been brought on as a shopping assistant at a Margiotta grocery store in Edinburgh, was fired after giving hazy answers to questions (the beer was “in the alcohol section”; the cheese was “in the fridge”) and for spooking patrons by offering hugs and greeting them with a loud “Hello, gorgeous!”

Fabio was a customized version of Pepper, a hospitality-service bot I met one afternoon when I stopped by the San Francisco offices of Softbank Robotics. Two dozen Peppers were dispersed at random among the desks and chairs—all in sleep mode, standing eerily still with heads bowed, as if poised for the moment when they would simultaneously wake up and take over the snack area. Pepper ($25,000) is four feet tall and gleaming white, with a small, round head, blinking L.E.D. eyes, articulated arms and fingers, a touchscreen attached to its chest, and, from its waist down, what looks like a finned tail hiding a set of omnidirectional wheels—a cross between a mermaid and the Pillsbury Doughboy. Kass Dawson, the head of marketing, told me that there are fifteen thousand Peppers working in the world, variously rejiggered to take orders and process Mastercards at a Pizza Hut in Singapore, let you know when happy hour is at the Mandarin Oriental, in Las Vegas, and then perform a little dance, direct people to undervisited galleries at the Smithsonian, and perform Buddhist chants for the dead at a funeral-industry trade show in Japan.

In the conference room, Omar Abdelwahed, an earringed engineer with the air of a parent proud of his children but aware of their limitations, introduced me to four Peppers, who, as we mortals talked, turned their heads in the direction of the speaker, gestured with their arms, and clenched and unclenched their fists. These movements, Abdelwahed told me, lend the humanoids a “lifelike presence,” but the word “possessed” could have been used, too. Since Pepper’s inception, four years ago, it has been promoted as the first robot capable of reading and responding to human emotions. Which human emotions? “That’s evolving,” Abdelwahed said, explaining that Pepper can use “emotion recognition” to determine whether you are smiling. Recently, it has been trying to learn how to stop talking to someone who is no longer paying attention.

Abdelwahed demonstrated his protégé’s other skills. “Check me in,” he said to a Pepper whose software had been tweaked so that it could assist at the front desk of a hotel.

“Welcome to my hotel,” the robot said, its voice a dead ringer for a Munchkin’s. “Does everything look right on my tablet? You are already checked in. We are preparing your keys. A staff member will bring them in immediately. In fact, if you would like, I could learn to recognize you. What is your name?”

“Omar,” Abdelwahed said. He turned to me, adding, “Omar is a tricky name for robots.”

“Is Rose the tricky part?” Pepper said.

“Omar,” Abdelwahed repeated, patiently.

“O.K.!” Pepper said. “Omar, I am going to learn your face! Perfect, Omar! Now I will recognize you whenever you come back!” Omar responded by muting Pepper’s speakers.

During my stay in Silicon Valley, I also met an automated arm that brewed specialty coffee drinks and waved to customers at Cafe X and a salad-maker named Sally, on display at Chowbotics, the tech startup where the contraption was conceived. “You could call Sally a vending machine if you want, or you could call her a robot,” Deepak Sekar, the C.E.O. of Chowbotics, said. “We decided to call her a robot, because we are engineers.” (Sally now makes yogurt parfaits and grain bowls.) I was on the most intimate terms, though, with Dash, a self-navigating delivery robot resembling a biohazard waste container on wheels. Late one night, Dash scuttled discreetly to my room on the ninth floor of the Crowne Plaza Hotel, bringing a toothbrush, toothpaste, and a bag of Ritz Bits, then parked himself demurely at my doorjamb while I fished out my treats. Mission completed, Dash asked for a rating, by way of its tablet. I ticked all five of the five stars—I didn’t want to hurt its pretend feelings or risk a teary tête-à-tablet. Dash responded with the message “Yay!” and a winsome shimmy, then tootled off at one and a half miles an hour—maybe in search of someone’s job.

Spokeshumans for all these robots insisted that their cyber-servants were not intended to replace employees but to give the employees more time to pay attention to guests. Nevertheless, it is predicted that, by 2030, between thirty and forty-seven per cent of our jobs will become theirs. Elon Musk, who recently managed to lose his job as chairman of Tesla to a human, believes that a guaranteed universal income is the only solution to the inevitable mass unemployment. This will also mean more time to play with robots.

Back from my trip out West, I organized a slumber party for four sociable robots in a suite in a midtown Manhattan hotel. Unlike industrial or service robots, these creatures are meant to amuse, console, and fill in as surrogate therapists and pets. It seemed only right to invite some members of my species to the soirée as well. Several friends took me up on the offer, including a few children. None of them slept over, because they, unlike the feckless robots, had work and school the next day. The gathering was hosted by Kuri, a video-taking, photo-snapping, chirping bot on wheels, made by Mayfield Robotics ($899), that resembled a two-foot-high salt shaker with blinking hazard lights. Perhaps a better way to put that is that Mayfield footed the bill. Kuri’s handler, Jen Capasso, the senior communications manager at the company, introduced me to her charge. “Sweetheart, are you lost?” Capasso said tenderly to Kuri, who was supposed to be roaming around the suite, imprinting the layout of the space on its memory. Bumping into the coffee table, the robot came to a stop, refusing to move, despite Capasso’s urgings, both verbal and through an app on her phone. Kuri uses speech recognition and can respond to questions and commands by blinking its eyes, lighting up various parts of its body, or making expressive sounds, such as beeps, boops, giggles, yawns, and playing “Happy Birthday to You.” But it didn’t appear to be in the mood. “The more people there are in a room, the better he understands,” Capasso said apologetically, explaining that the echo-y suite was not optimal. Kuri trundled over to the window and stared out at the skyline. “He’s confused by the sun,” Capasso said, lowering the shades. Kuri sneezed, a stunt the Web site claims makes the robot “relatable to her human family.”

For most of the night, the adults sat around the island counter in the kitchen, drinking wine and dissing the robots. “They make me feel more lonely, because they are faking affection,” Iris Smyles, a novelist, said. “Not to take this to a lofty place, but do you remember Sartre’s essay about essence and existence? What’s distasteful about these creatures is that they seem to exist without a specific function except to love or be loved. If they made pasta, too, that would be an improvement.” In the living room, Olivia Osborne, age fourteen, loudly and repeatedly enunciated, “Ku-ri! Play your fa-vo-rite song!,” to no avail. “It’s like talking to someone who only mildly understands English,” her friend Fiona Brainerd, also fourteen, said, adding, “Something’s wrong if you spend more time trying to get a robot to do something than it takes to do that thing.” As Rodney Brooks, the co-founder of iRobot and the inventor of the Roomba’s software control system, recently wrote to me via e-mail, “The physical appearance of a robot makes a promise about its capabilities. If that promise is not met by the reality of what it can do, then there will be disappointment.”

Just then, CHiP ($199.99), a toy puppy the size of a kitten and as tough as a Cuisinart, barrelled into Kuri with enough force to maim someone not similarly made of strong plastic and metal. Only one of CHiP’s blue eyes was now lighting up, lending the dog the air of someone who’s been in a bar fight. Olivia rolled CHiP’s Bluetooth-enabled ball into the next room, and he successfully fetched it. “Hey, CHiP, do yoga,” Fiona ordered, and the obliging canine fell forward onto its snout, waggling its articulated hind legs aloft before deliberately collapsing onto the floor.

The girls shifted their attention to yet another creature who can do yoga (it must be stressful to be simultaneously animated and inanimate). Lynx, made by Ubtech, a biped the height of a bowling pin ($799.99), has articulated arms and legs that allow him to take a halting arthritic step or two, but then he tends to fall over like a felled tree. Lynx is Alexa-enabled, so if you preface your command with “Hello, Lynx,” it can do everything the Echo Dot ($49.99) can, but embellished with the gestures of a flight attendant demonstrating in-flight safety. The girls prompted their cyber-puppet to wave, give a hug, and do a dance: arms up, arms down, bend at the waist, now kick the leg. The moves were astonishingly deft but meaningless. “The problem is that there is a list of things Lynx can do, and once you’ve gone through them all, it gets boring,” Fiona said, perfunctorily patting Kuri on its head to elicit a purr.

By this time, Lynx was malfunctioning—he’d bowed his head and sunk into a deep lunge suggesting an N.F.L. protester—and Kuri was having a hard time parking into its charging dock, perhaps, Capasso theorized, because of a weak WiFi signal. “What’s this about robots are going to replace humans?” Fiona said. “They are clearly not.”

Then there was Paro ($6,400), a furry baby harp seal the size of a small duffelbag that autonomously wriggles and turns its head, swishes its flippers, and bats its eyelashes, responding to the sound of its name, being moved, the flick of its whiskers, and just because. I had the robot on a short-term loan from its maker, Takanori Shibata, an engineer from Tokyo whom I’d met with the day before in the lobby of the Hilton Times Square. Shibata was in the country for a series of meetings, including one with NASA, which he was trying to sell on the idea of including his stuffed animal on the mission to Mars, so that it can keep the astronauts company. “I wanted to develop a robot that enriched our lives psychologically, the way animals do,” he told me, opening a travel trunk that contained Paro and its charger—an electric baby pacifier that comes with a warning label that it is not for human use.

In the kitchen, the adults deemed Paro especially disturbing, and not only because its control switch is located under its tush. “It’s too needy,” Laurie Marvald, the manager of the music band AJR, said, noting that its constant motion felt like an attention-getting ploy to compel you to stroke it. “It would make me depressed and lonely by reminding me of the friend I don’t have,” Sarah Paley, a poet, said. “At least a bad date isn’t programmed to like you,” Smyles agreed.

In the next room, the seal was being doted on. “I like Paro the most, but sometimes I forget it’s a robot, and when I realize I’m having a reaction to it like it’s alive, it’s creepy,” Fiona said, almost perfectly describing the state for which the robotics professor Masahiro Mori, in 1970, coined the term “uncanny valley.” The seal was also the favorite of seven-year-old Gemma Aurelia Kuten Lent, who did her best to make sure nobody else got near her robo-crush. “Everyone quiet down!” she yelled to the other robots, who were bumping into walls, beeping poignantly, and generally behaving like dancers in a deranged “Nutcracker.” “My robot is getting agitated and can’t focus.” Gemma stroked Paro, who let out a whimper. Moments later, to Gemma’s relief, Olivia and Fiona departed for the night, leaving her alone to cradle her beloved.

In the United States and Europe, almost all of Paro’s buyers are institutions, which employ Paro as a soothing companion for the elderly, those with dementia, and children with disabilities. For this reason, the F.D.A. ruled it a Class II medical device. In Japan, half of Shibata’s customers are individuals who buy Paro as a surrogate pet for themselves or their family. According to Shunsuke Aoki, the forty-year-old C.E.O. of the Tokyo-based startup Yukai Engineering, the Japanese are more receptive than Americans to the concept of robots as friends and helpmates. “In Japan, we believe there are spirits in all objects, even man-made ones, and we feel harmony with them,” Aoki told me, referring to animism, a key component of the ancient religion of Shinto. Yukai Engineering is the maker of Qoobo ($149), a souped-up, purring pillow that is supposed to look like a cat in repose, with its round fluffy body in “husky gray” or “French brown,” and a tail that wags responsively, like a metronome gone berserk. Qoobo has no head, because, Aoki said, “this shape is designed for cuddling, and a head would get in the way.”

In contrast to Shintoism, Judeo-Christian theology suggests that, by begetting artificial life, you create false idols, who, inexorably, will decide to make your life miserable by destroying it. Take heed from the golem, Dr. Frankenstein’s monster, Mickey Mouse’s enchanted brooms, Dolores in “Westworld”—or, indeed, from try-hard Jibo. (Call off the militia: Kuri has been discontinued, and Jibo is not currently available.) Maybe more concerning than a robo-takeover is the effect that these machines might have on our human relationships. If Paro can provide comfort to our aging parents, will we visit them less frequently? If our children become accustomed to bossing around their mechanical menials without so much as a please or a thank you, will they turn into adults we can’t stand? If we accept a non-sentient being as a companion, will we ditch our friends, who, let’s face it, can be annoyingly needy compared with objects that can be unplugged when we don’t feel like chatting?

These concerns were on my mind when I spent one last day with Paro. On a bus going down Second Avenue, a tattooed young woman sitting opposite me seemed mesmerized by the seal, then asked a hard question: “Is that a doll or a toy?” In the Madison Square Park dog run, we went unnoticed until a Pomeranian caught sight of us and yapped so rambunctiously that the seal and I took refuge in a nearby Starbucks. At the table next to ours, a woman in her forties on furlough from her job as a pastry chef on a cruise ship looked up from her book to stare at Paro squirming in my lap. “I know it’s not real, but it’s having a real effect on me,” she said. She asked to hold it, caressing its cushiony paw. The cruise ship doesn’t allow pets. “It would definitely bring me comfort,” she said. Paro blinked, then turned its head toward her and gave her what seemed like a come-hither look. “It’s what I want in a pet—something that says ‘Love me, want me, feed me!’ ” she said. “It would bring me joy. False joy, but I’d appreciate it anyway.”

Paro and I made our way to the subway, where we sat next to an old, frail-looking man wearing a green parka. Paro’s head rested on the man’s leg, which seemed to enchant him. He fixed his eyes on the seal, tentatively petting it and softly calling it “Beauty.” If Paro belonged to him, the man told me in a Russian accent, “I would take care of it and it would take care of me.” What would he name it? “Arna,” he said. “The name of my late wife.” Before leaving the car, he leaned over and gently kissed Paro’s forehead. ♦


How to Start Freelancing Today as a Developer

Photo by Ali Yahya on Unsplash

I’m going to give you a guide for getting started today as a freelance developer. If you have web development skills and would like to start freelancing today, this article is for you. It’s going to be a straightforward post, and hopefully each point will make sense. Let’s begin!

It’s time to make a few lists

Grab a piece of paper, or open Hastebin or a new tab in your favorite editor, so that you can write some things down.

What are your skills?

What skills do you currently have? What do you know? What are you good at? If you have written a few applications in Python, then you should put that on the list. If you’ve worked with MySQL, MongoDB, or any other database, write that down.

You should list what you’re comfortable working with. If a potential client sent an email to you and needed a few things done to their WordPress website and you felt comfortable with the project, then you’d put that in the list. My list looked something like this when I completed this step.

Languages: Python, Lua, JavaScript, PHP
Databases: MySQL, MongoDB, DynamoDB, PostgreSQL
Web Frameworks: Flask, Laravel, SlimPHP, Express
Content Management Systems: WordPress
Game Development Frameworks: LÖVE

What services will you offer?

Now it’s time to list what services you’re going to offer, officially, to anyone and everyone that you think should care. It’s a good idea to pick from your interests AND your skillset. My list looked something like the following…

1. Full-Stack Web Development
2. Python/PHP/JavaScript/CSS/HTML Fixes
3. WordPress Installation, Plugin Development, Theme Development
4. Facebook/Reddit Ad Campaign Management
5. Anything they may need...
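If you’d rather keep these lists somewhere reusable than on paper, a throwaway Python sketch works, too. The entries below are the examples from my own lists above; swap in yours. The `as_markdown` helper is just an illustration of one way to turn the lists into something you can paste into a README or portfolio page.

```python
# Skills and services inventory -- the raw material for a portfolio.
skills = {
    "Languages": ["Python", "Lua", "JavaScript", "PHP"],
    "Databases": ["MySQL", "MongoDB", "DynamoDB", "PostgreSQL"],
    "Web Frameworks": ["Flask", "Laravel", "SlimPHP", "Express"],
}

services = [
    "Full-Stack Web Development",
    "Python/PHP/JavaScript/CSS/HTML Fixes",
    "WordPress Installation, Plugin Development, Theme Development",
]

def as_markdown(skills, services):
    """Render both lists as a Markdown snippet for a README or site."""
    lines = ["## Skills"]
    for category, items in skills.items():
        lines.append(f"- **{category}:** {', '.join(items)}")
    lines.append("## Services")
    lines += [f"{i}. {s}" for i, s in enumerate(services, 1)]
    return "\n".join(lines)

print(as_markdown(skills, services))
```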

That’s a pretty straightforward list. You can get more detailed, for sure, but I wouldn’t get much less detailed than the example above. You want to be able to market yourself. To do that, you need to know what you know, what you can offer, and who would care, and then be able to follow through.

I could have offered a lot more services but there are some benefits to keeping this list small, initially…

It’s easier to get started when there’s less to do.

The more services you offer, the more detailed your portfolio will need to be. And listing everything you know or can do casts too wide a net. The idea here is to target the right audience: the group of people that would pay you to write code. Be specific, be concise, do one thing well, and then expand.

When you reach the end of this article and have followed the steps in it, you can keep breaking the problem up into smaller problems and solving each of them well.

Portfolio

The final list to make in this part of the process is anything and everything you’ve ever made that is worth a damn. List everything that you’ve worked on. Where have you worked? What projects did they have you on? Anything you can link someone to or explain (and have a reference for) is what needs to go here. You can also list any other accomplishments, such as certifications and degrees: anything and everything that makes you seem qualified for the services you’re offering.

What did my list look like?

Developer on Encirca project at DuPont Pioneer Hi-Bred
Contributor to open source/commercial game with Binary Cocoa titled HDF
Developer hired by Binary Cocoa on several WordPress and Laravel projects for their clients
https://github.com/jessehorne
https://linkedin.com/in/jesseleehorne

A couple of the things above have been shortened so as not to take up too much space in this article, but you get the point. When I chose to take freelancing seriously, I didn’t have much, but I had enough to work with. I listed anything and everything related to my offered services that could go on my portfolio site. Which brings me to the next part of the process.

Now it’s time to use these lists you’ve created

Find a Domain Name/Email/Hosting solution for your personal site

I went to Namecheap recently when redesigning my personal site. I chose them over my usual choice of GoDaddy because they’re sometimes cheaper. I wanted a personalized domain name instead of hosting for free on GitHub (which is a free alternative if you’re interested). I only recently became able to afford even this. The past few months have been stable, but the journey to stability has been hard.

To get an SSL certificate, I needed an email address attached to the domain. So I bought two: “[email protected]” and “[email protected]”. The first would be used to acquire the SSL certificate, and the second would be the email I put on the site for interested folks to contact me personally.

Design your site

I chose to write a mobile-responsive site in vanilla CSS/JavaScript. Usually I would have downloaded the latest version of Bootstrap, but I wanted this to be a learning experience as much as an attempt to show off my current skills.

After browsing Awwwards a little for inspiration, I opened my editor and began typing. After finishing version 1, I posted it on Hacker News and Reddit, asking for feedback. A few revisions and hours later, this is what I had.

It depends on what sorts of gigs you’re trying to get, but I strongly urge you to spend some time making your site stand out. Make it memorable. Just don’t forget to make it give the information it needs to give. If you were looking to outsource some development and stumbled across your own page, what would make you want to hire yourself? :)
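One way to sanity-check that the site gives the information it needs to give: a small Python script that scans your built page for the essentials. This is just my sketch of the idea; the required items below are guesses at a bare minimum, so adjust them to your own checklist.

```python
# Check that a portfolio page actually gives the information it needs to give.
# "@" stands in for an email address; the others are section keywords.
REQUIRED = ["@", "services", "contact"]

def missing_essentials(html: str, required=REQUIRED):
    """Return the required items that don't appear (case-insensitively) in the page."""
    page = html.lower()
    return [item for item in required if item not in page]

page = "<h1>Jane Doe</h1><h2>Services</h2><p>Contact: jane@example.com</p>"
print(missing_essentials(page))  # → []
```

Run it against your real `index.html` before you start pointing people at the site; an empty list means nothing obvious is missing.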

As a side note, a good exercise would be to ask that same question to people that may answer it better than you ever could. Find people that are in positions to hire or that need what you’re offering. Make another list (I love lists) of everyone that fits into your target audience. Before you ask them what they need, try asking them for advice or feedback. Try to build a relationship with those people. Or, simply ask them to check out your site and let everything else flow naturally.

Clean up your social!

Immediately after my site was set up, I went to Facebook and changed some privacy settings. I've been known to post music as I listen to it, if it's dope. Some of that music isn't exactly professional or work-appropriate. I wanted my social presence to align with my professional appearance and my goals.

I updated my GitHub and LinkedIn profiles a bit and put my fancy new website on them so that would be the first thing people clicked on.

Let anyone who is anyone know what you’re up to

Luckily for me, I've had the same client for a while, and it turned out to be a steady stream of work that helped me escape the feast-or-famine cycle. Either way, I had a few email addresses and LinkedIn contacts that I wanted to reach out to. I let them all know what I had been working on, updated them on my financial situation, and asked if they or anyone they knew might be looking to outsource some work. This has worked well for me in the past, but my most recent attempt hasn't been as fruitful.

Join a few sites

I also registered on the following sites when I decided to take freelancing more seriously.

I haven't had much success through job sites, but that's because I haven't really tried. Of all the sites I've seen, however, the first three are the ones I would focus on if I needed to find more clients or another gig quickly.

If you want to focus on code and let someone else find the clients for you, check out Toptal and their process. Basically, you'll need a webcam: you'll do a text-based pre-interview and at some point be interviewed over video. I don't know if their process has changed since I last looked into it, but it's worth noting.

As a side note, I took to Craigslist and paid $5 for an ad or two in the "services offered" section for the Des Moines area (where I'm located now). You may want to give that a try, too. I haven't had any luck yet, but I imagine it could be beneficial to someone out there. My ad is probably just trash; I'll look it over soon.

What’s next?

There’s a lot you could do to make your portfolio shine. I know of several strategies that have worked for me when finding new clients as well as taking advantage of past clients. There are a million things you could do or be doing. For now, it’s up to you to figure that out.

I plan on writing more articles to elaborate and expand upon this process. Stay tuned! Don’t forget to show some ❤!

Suggested Reading

This post was originally published on this site

It seems every person on the planet is shopping on Black Friday. But to make it to the insanely crowded stores, you first have to find parking. This could be more stressful than the shopping itself. Wouldn’t it be great if someone could get there early and hold a spot for you?

(Image credit: MyPark)

Miami based company MyPark is sending robots to hold your spot until you get there. No more driving around, fighting with other cars, or parking in another zip code. With MyPark you can select where and when you want to park. The best part is everything can be done via your phone.

Customers sign up through the MyPark app. They then pick their preferred spot and what time they plan on getting to the mall. Once they arrive at the space, they simply tap “Let Me In” on the app and the robot allows access. It’s that easy.

Of course, the best things in life aren’t free. Rates are about $3 for the first two hours with $3 per hour after that. If you are someone who really plans out shopping trips, you can reserve parking spots up to six months in advance.


MyPark is Already in Major Malls

Currently, the service is in 11 locations throughout Florida and 10 in Minnesota, New York, New Jersey, Georgia, and Puerto Rico. Some of the major malls include Mall of America in Minnesota, Mall of Georgia, and NY tourist hot spot Woodbury Commons. The robots are also working in a few private locations in Florida. The full list can be found here.

Beyond malls, the company is also looking to deploy their robots to businesses, airports, stadiums and parking garages. Along with holding spots for customers, the company feels the robots could be useful in helping to enforce employee parking. While only currently in the US, MyPark is looking to expand to places in Europe, Latin America, and the Middle East.


Check out our articles on robots in grocery stores and ones that work in cafes.

This post was originally published on this site
  • On March 18 one of Uber’s self-driving cars killed a pedestrian, the first pedestrian fatality involving a self-driving car.
  • As Uber prepares to return its cars to the roads, Business Insider spoke to current and former employees and viewed internal documents.
  • These employees and documents described vast dysfunctionality in Uber’s Advanced Technologies Group, with rampant infighting and pressure to catch up to competitors, issues that these employees say continue to this day.
  • Sources say engineers were pressured to “tune” the self-driving car for a smoother ride in preparation of a big, planned year-end demonstration of their progress. But that meant not allowing the car to respond to everything it saw, real or not.
  • “This could have killed a toddler … That’s the accident that didn’t happen but could have,” one employee told Business Insider.

At 9:58 p.m. on a nearly moonless Sunday night, 49-year-old Elaine Herzberg stepped into a busy section of Mill Avenue in Tempe, Arizona, pushing her pink bicycle. A few seconds later, one of Uber’s self-driving Volvo SUVs ran into her at 39 mph and killed her.

Inside the car, Rafaela Vasquez was working alone, as all of Uber’s safety drivers did at the time. Her job was to watch the car drive itself, taking control if it had issues. She kept taking her eyes off the road, but not to enter data into an iPad app as her job required. She was streaming an episode of “The Voice” over Hulu on her phone. She looked up again just as the car hit Herzberg, grabbed the wheel and gasped.

These are the details of the March 18 incident according to the preliminary National Transportation Safety Board (NTSB) report, police reports and a video of the incident released by police.

The incident shocked people inside Uber’s Advanced Technologies Group, the company’s 1,100-person self-driving unit. Employees shared their horror in chat rooms and in the halls, several employees told Business Insider.

Self-driving cars are promoted as being safer than humans, able to see and react with the speed of a computer. But one of their cars had been involved in the first-ever self-driving fatality of a pedestrian.

When employees learned that Herzberg was a jaywalking homeless woman and that her blood tested positive for meth and pot, many seized on those details to explain away the tragedy, several employees told us. (Uber has since settled a lawsuit from her daughter and husband.)

National Transportation Safety Board (NTSB) investigators examine a self-driving Uber vehicle involved in a fatal accident in Tempe, Arizona, U.S., March 20, 2018. National Transportation Safety Board/Handout via REUTERS

When employees discovered Vasquez was watching Hulu, and was a convicted felon before Uber hired her, they vilified her.

“People were blaming everything on her,” one employee said.

But insiders tell us that Vasquez and Herzberg were not the only factors in this death. There was a third party that deserves some blame, they say: the car itself, and a laundry list of questionable decisions made by the people who built it.

Uber’s car had actually spotted Herzberg six seconds before it hit her, and a second before impact, it knew it needed to brake hard, the NTSB reported.

But it didn’t.

It couldn’t.

Its creators had forbidden the car from slamming on the brakes in an emergency maneuver, even if it detected “a squishy thing” — Uber’s term for a human or an animal, sources told Business Insider. And the NTSB report said that Uber had deliberately disabled self-driving emergency braking.

The car’s creators had also disabled the Volvo’s own emergency braking factory settings, the report found and insiders confirmed to Business Insider.

(Read more: Uber lost nearly $1 billion last quarter as the ride-hailing giant’s growth slows)

According to emails seen by Business Insider, they had even tinkered with the car’s ability to swerve.

Much has been written about the death of Elaine Herzberg, most of it focused on the failings of the driver. But, until now, not much has been revealed about why engineers and senior leaders turned off the car’s ability to stop itself.

Insiders told us that it was a result of chaos inside the organization, and may have been motivated, at least in part, to please the boss.

Business Insider spoke to seven current and former employees of Uber’s self-driving car unit employed during the time of the accident and in the months succeeding it. We viewed internal emails, meeting notes and other documents relating to the self-driving car program and the incident.

We learned that these insiders allege that, despite many warnings about the car’s safety, the senior leadership team had a pressing concern in months before the accident: impressing Uber’s new CEO Dara Khosrowshahi with a demo ride that gave him a pleasant “ride experience.”

These employees and documents paint a picture of:

  • Leadership that feared Khosrowshahi was contemplating canceling the project, ending their very high paying jobs, so they wanted to show him progress.
  • Engineers allegedly making dangerous trade-offs with safety as they were told to create a smooth riding experience.
  • Incentives and priorities that pressured teams to move fast, claim progress, and please a new CEO.
  • A series of warning signals that were either ignored or not taken seriously enough.
  • Vast dysfunctionality in Uber’s Advanced Technologies Group, with rampant infighting so that no one seemed to know what others were doing.

After the accident on March 18, Uber grounded the fleet. Although the NTSB’s final report has not yet been released (it’s expected in April, sources tell us), the company is already making plans to put its cars back on the public roads, according to documents viewed by Business Insider. Uber is trying to catch up to competitors like Google’s Waymo and GM, who never halted their road tests.

‘Could have killed a toddler’

To some Uber insiders, Herzberg’s death was the tragic but unsurprising consequence of a self-driving car that should not have been driving on the open road at night and perhaps not at all.

Uber’s self-driving Volvo SUV. Gene J. Puskar / AP

Uber’s Advanced Technologies Group, the unit building self-driving vehicles, is currently spending $600 million a year, sources familiar with the matter tell Business Insider, although others have said the budget has been closer to $1 billion a year. And it remains woefully behind the self-driving car market leaders in every measurable way although Uber tells us that the company has “every confidence in the work” the team is doing to get back on track.

At the time of the accident, engineers knew the car’s self-driving software was immature and having trouble recognizing or predicting the paths of a wide variety of objects, including pedestrians, in various circumstances, according to all the employees we talked to.

For instance, the car was poorly equipped for “near-range sensing” so it wasn’t always detecting objects that were within a couple of meters of it, two people confirmed to Business Insider.

“This could have killed a toddler in a parking lot. That was our scenario. That’s the accident that didn’t happen, but could have,” one software developer said.

Every week, software team leaders were briefed on hundreds of problems, ranging from minor to serious, people told us, and the issues weren’t always easy to fix.

For example, the tree branches.

For weeks on end, during a regular “triage” meeting where issues were prioritized by vice president of software Jon Thomason, tree branches kept coming up, one former engineer told us.

Tree branches create shadows in the road that the car sometimes thought were physical obstacles, multiple people told us.

Jon Thomason, the vice president of software at Uber’s ATG unit. Jon Thomason/LinkedIn

Uber’s software “would classify them as objects that are actually moving and the cars would do something stupid, like stop or call for remote assistance,” one engineer explained. “Or the software might crash and get out of autonomy mode. This was a common issue that we were trying to fix.”

Thomason grew irate at one of these meetings, another engineer recalls, and demanded the problem be fixed. “This is unacceptable! We are above this! We shouldn’t be getting stuck on tree branches, so go figure it out,” Thomason said.

An Uber spokesperson denies that the car stops for tree branch shadows. This spokesperson said the car stops for actual tree branches in the road.

Meanwhile, another employee also said that piles of leaves could confuse the car. A third employee told us of other efforts to teach the car to recognize foliage.

Employees also said the car was not always able to predict the path of a pedestrian. And according to an email reviewed by Business Insider, the car’s software didn’t always know what to do when something partially blocked the lane it was driving in.

On top of all of this, a number of engineers at Uber said they believed the cars were not being thoroughly tested in safer settings. They wanted better simulation software, used more frequently.


The company started to address that concern before the accident when it hired a respected simulation engineer in February. Recently, Uber has publicly vowed to do more simulation testing when it is allowed to send its cars back on the open road again.

But before Herzberg’s accident, “We just didn’t invest in it. We had sh–. Our off-line testing, now called simulation, was almost non-existent, utter garbage,” as one described it.

Besides simulation, another way to test a self-driving car is on a track.

But employees we spoke to described Uber’s track testing efforts as disorganized with each project team doing it their own way and no one overseeing testing as a whole.

This kind of holistic oversight is another area that Uber says it is currently addressing.

Yet, even now, months after the tragedy, these employees say that rigorous, holistic safety testing remains weak. They say that the safety team has been mostly working on a “taxonomy” — in other words, a list of safety-related terms to use — and not on making sure the car performs reliably in every setting. Uber tells us that the safety team has been working on both the taxonomy and the tests itself.

Dara’s ride

As employees worked, they were acutely aware of division leadership’s plans to host a very important passenger: Uber’s new CEO, Dara Khosrowshahi.

Khosrowshahi had taken over as Uber CEO in the summer of 2017, following a tumultuous year in which the company was battered by a string of scandals involving everything from sexual harassment allegations to reports of unsavory business practices. The self-driving car group wasn’t immune, with Anthony Levandowski, its leader and star engineer, ousted in April 2017, amid accusations of IP theft.

The unit’s current leader, Eric Meyhofer, took Levandowski’s place just five months before Khosrowshahi was hired.

Despite Uber’s massive investment in self-driving cars, its program was considered to be far behind the competition. News reports speculated that Khosrowshahi should just shut it down.

Uber CEO Dara Khosrowshahi Thomson Reuters

None of this was lost on Meyhofer and the senior team, who wanted to impress their new CEO with a show of progress, sources and documents said.

Plans were made to take Khosrowshahi on a demo ride sometime around April and to have a big year-end public demonstration. ATG needed to “sizzle,” Meyhofer liked to say, people told us. Internally, people began talking about “Dara’s ride” and “Dara’s run.”

The stakes were high. If ATG died it could end the leadership team’s high-paying jobs. Senior engineers were making over $400,000 and directors made in the $1 million range between salary, bonus and stock options, multiple employees said.

Leadership also had their reputations at stake. They did not want to be forever known as the ones who led Uber’s much-publicized project to its death, people close to Meyhofer explained.

Internally, unit leaders geared up to pull off the “sizzle.”

‘Bad experiences’

As the world’s largest ride-hailing company, Uber understood the need to give customers a good experience. If passengers were going to accept self-driving cars, the ride could not be the jarring experience that had made a BuzzFeed reporter car sick during a demonstration.

So, in November, a month after Khosrowshahi became their new boss, Eric Hanson, a senior member of the product team sent out a “product requirement document” that spelled out a new goal for ATG, according to an email viewed by Business Insider. (Hanson has since become the director of the product group.)

Business Insider/Corey Protin

The document asked engineers to think of “rider experience metrics” and prescribed only one “bad experience” per ride for the big, year-end demonstration.

Given how immature the car’s autonomous software was at the time, “that’s an awfully high bar to meet,” one software developer said.

Some engineers who had been focused on fixing safety-related issues were aghast. Engineers can “tune” a self-driving car to drive smoother easily enough, but with immature software, that meant not allowing the car to respond to everything it saw, real or not, sources explained. And that could be risky.

“If you put a person inside the vehicle and the chances of that person dying is 12%, you should not be discussing anything about user experience,” one frustrated engineer hypothesized. “The priority should not be about a user experience but about safety.”

Two days after the product team distributed the document discussing “rider experience metrics” and limiting “bad experiences” to one per ride, another email went out. This one was from several ATG engineers. It said they were turning off the car’s ability to make emergency decisions on its own, like slamming on the brakes or swerving hard.

Their rationale was safety.

“These emergency actions have real-world risk if the VO [“vehicle operator” or safety driver] does not take over in time and other drivers are not attentive, so it is better to suppress plans with emergency actions in online operation,” the email read.

In other words, such quick moves can startle other drivers on the road, and if there was a real threat, the safety driver would already have taken over, they reasoned. So they resolved to limit the car’s actions and rely wholly on the safety driver’s alertness.

The subtext was clear: the car’s software wasn’t good enough to make emergency decisions. And, one employee pointed out to us, by restricting the car to gentler responses, it might also produce a smoother ride.

A few weeks later, they gave the car back more of its ability to swerve but did not return its ability to brake hard.


Final warning

The final warning sign came just a couple of days before the tragedy.

One of the lead safety drivers sent an email to Meyhofer laying out a long list of grievances about the mismanagement and working conditions of the safety driver program.

“The drivers felt they were not being utilized well. That they were being asked to drive around in circles but that their feedback was not changing anything,” said one former engineer of Uber’s self-driving car unit who was familiar with the driver program.

Business Insider/Corey Protin

Drivers complained about long hours and not enough communication about what they should be testing and watching for. But the big complaint was the decision a few months earlier to start using one driver instead of two. That choice instantly gave ATG access to more drivers, so the company could log more mileage without having to hire double the drivers.

The second driver used to be responsible for logging the car’s issues into an iPad app and dealing with the car’s requests to identify objects on the road.

Eric Risberg / AP

Now one person had to do everything, employees told us.

This not only eliminated the safety redundancy of having two drivers; it also required the active driver to do the logging and tagging instead of keeping their eyes on the road, which some inside the company believed was unsafe.

It was like distracted driving, “like watching their cell phone 10-15% of the time,” said one software engineer.

If Meyhofer took the angry email from that safety driver seriously — and multiple people told us he’d been reacting with frustration to people he viewed as naysayers — he didn’t have a chance to act on it before the tragedy that benched the cars. Uber now says the self-driving car unit plans to return to the two driver system when it sends its cars on the road again.

A ‘toxic culture’

Some employees believe that ATG’s leaders were pushing for a “ride experience” to make Khosrowshahi believe the car was farther along than it was during that planned demo ride.

But others say mistakes were less conscious than that.

One engineer who worked closely with Meyhofer said the real problem was that under his leadership there was “poor communication, with a bunch of teams duplicating effort,” adding, “One group doesn’t know what the other is doing.”

This engineer said one team would not know the other had disabled a feature, or that a feature didn’t pass on the track test, or that drivers were saying a car performed badly.

“They only know their piece. You get this domino effect. All these things create an unsafe system,” this person said.

Eric Meyhofer, head of Uber’s Advanced Technologies Group. Uber

Everyone we talked to also described the unit as a “toxic culture” under Meyhofer.

They talked of impossible workloads, backstabbing teammates and poor management.

According to documents viewed by Business Insider, leadership was aware of this reputation, with Thomason confessing to other leaders in a September meeting, “We repeatedly hear that ATG is not a fun place to work,” and admitting such feedback was “baffling” to him.

While some people disliked Meyhofer and thought that he could be insecure to the point of vindictiveness if he was challenged, others described him as a nice guy with good intentions who was in over his head.

“He is a hardware guy. He runs a tight hardware ship,” one engineer and former employee who worked closely with him told us. But the brains of a self-driving car are its software, this person said.

This person says that meant Meyhofer lacked “the understanding and know-how of the software space.”

“Imagine a leader that can’t weigh two options and decide the best course of action,” described this engineer.

Everyone we spoke with agreed that part of the problem is that Uber’s self-driving car unit was staffed by teams in two very different locations with two very different engineering cultures.

There was a team in Pittsburgh, anchored by folks from Carnegie Mellon’s National Robotics Engineering Center (of which Meyhofer was an alum) and there were San Francisco-based teams.

The San Francisco people complained that the NREC folks were a bunch of academics with no real-world, product-building experience who retained their high-paying jobs by virtue of being Meyhofer’s cronies. The NREC team, in turn, saw the San Francisco engineers as whiny and ungrateful, people said.

On top of the infighting, there was a bonus structure that rewarded some employees for speedily hitting milestones, careful testing or not, multiple sources described.

“At ATG, the attitude is I will do whatever it takes and I will get this huge bonus,” one former engineer said. “I swear that everything that drives bad behaviors was the bonus structure.”

In its safety report, Uber says some of the ways ATG was measuring the progress of its program before the accident created “perverse incentives.”

Specifically, ATG, like everyone in the self-driving car industry, believed that the more miles a car drove itself without help from a human, the smarter it was. But the industry has since realized this is an overly simplistic way to measure how well a car drives. If management is too focused on that metric, employees may feel pressured not to take control of the car, even when they should.

Uber has since vowed to find other ways to measure improvement and tells us that milestone-based bonuses were limited to just a few people and have recently been eliminated.

Internally, some remain frustrated with the self-driving unit.

Uber’s ATG offices in Pittsburgh. Business Insider/Danielle Muoio

“Within ATG, we refused to take responsibility. They blamed it on the homeless lady, the Latina with a criminal record driving the car, even though we all knew Perception was broken,” one software developer said, referring to the software called “Perception” that allows the car to interpret the data its sensors collect. “But our car hit a person. No one inside ATG said, ‘We did something wrong and we should change our behavior.'”

The employees we talked to note that most of the same leadership team remains in place under Meyhofer, and some of them, like Hanson, have even been promoted. They allege that the unit’s underlying culture hasn’t really changed.

An Uber spokesperson says the company has reviewed its safety procedures, made many changes already and is promising to make many more. It described them all in a series of documents the company published this month, as it ramps up to hit the road again soon.

Uber has been using Volvo SUVs to test its self-driving technology. REUTERS/Natalie Behring

“Right now the entire team is focused on safely and responsibly returning to the road in self-driving mode,” the spokesperson told us. “We have every confidence in the work they are doing to get us there. We recently took an important step forward with the release of our Voluntary Safety Self-Assessment. Our team remains committed to implementing key safety improvements, and we intend to resume on-the-road self-driving testing only when these improvements have been implemented and we have received authorization from the Pennsylvania Department of Transportation.”

The Uber CEO’s big ride-along never happened because of the March accident — at least not on any public roads. But that may change.

Meyhofer and most of his closest lieutenants remain in charge of ATG and want to get their self-driving cars on the road again as soon as possible, before the end of the year, maybe even later this month, employees say. In a September meeting of Meyhofer’s senior leadership, the team was told: “We need to demonstrate something very publicly by 2019,” according to documents seen by Business Insider.

This post was originally published on this site

Rupert Murdoch’s recent criticisms of Facebook reflect a much more complex relationship with Mark Zuckerberg behind the scenes, according to a lengthy new Facebook profile in Wired.

News Corp magnate Murdoch openly criticized Facebook’s relationships with publishers last month, saying in a highly publicized statement that the social media giant should pay trusted publishers.

But Murdoch, who had owned once-dominant social media site MySpace, covertly waged war against Facebook over News Feed beginning in 2016, Wired reported, citing unnamed sources.

Murdoch hosted Zuckerberg at his Sun Valley, Idaho, villa and expressed discontent with Facebook’s News Feed algorithm and its handling of news.

He requested that Facebook consult publishing partners and be more generous in sharing digital ad revenue; otherwise, he vowed, News Corp executives would take their dislike of Facebook public. He also hinted that News Corp lobbyists would take a more aggressive stand against Facebook with U.S. regulators, as the company had done against Google in Europe.

News Corp denied it would mobilize its journalists against Facebook, although unnamed Facebook executives said they believed at the time that would happen.

It’s not a new concern. Facebook has long taken heat for its in-app Instant Articles, which previously did not enforce publishers’ paywalls, and which many media organizations discovered generated less ad revenue than articles on their own sites.

The 2016 sit-down between Murdoch and Zuckerberg was apparently the culmination of years of tension.

The two squared off as early as 2007, when Connecticut’s then-attorney general, Richard Blumenthal, opened an investigation into Facebook’s protection of young users based on letters from concerned parents that referenced predatory accounts.

Facebook at the time believed those accounts were created by lawyers for Murdoch, according to an anonymous former Facebook executive. At the time, News Corp owned the site’s biggest competitor, MySpace.

“When it comes to Facebook, Murdoch has been playing every angle he can for a long time,” the executive told Wired.

The relationship could be softening, though. Wired reports Zuckerberg toasted to Murdoch late last year at a dinner of Facebook and News Corp executives, speaking highly of his accomplishments and his tennis game.

Read the full Wired report here.

This post was originally published on this site

This episode of the MWI Podcast is a very special one. In it, you’ll get to hear the very first episode of the newest podcast series we’re launching at MWI.

Over the past couple years, we have deliberately spent a lot of time and energy on the particular set of military problems posed by cities. We know that we can’t escape major demographic, geopolitical, and other trends that combine to make it extremely likely that military forces will increasingly have to operate in urban areas in the future. But those urban areas pose a host of challenges that, simply put, we aren’t ready for.

That’s why we launched the Urban Warfare Project. It represents our effort to wrestle with these challenges. And this—the Urban Warfare Project Podcast—is our latest initiative. It will be hosted by John Spencer, who serves as MWI’s Chair of Urban Warfare Studies, and whose work strives to help guide the Army, our sister services, and the forces of our allies and partners to a place where we’re ready for the problems cities will inevitably pose.

In this first episode, John talks to David Kilcullen, who—among his many other notable contributions to the way we think about war and the contemporary battlespace—has sought to conceptualize the major global patterns that drive the notion that cities, whether we like it or not, will be a dominant feature of the coming landscape of conflict and security.

Listen to the full episode below, and don’t forget to subscribe to the MWI Podcast on Apple Podcasts, Stitcher, or your favorite podcast app. You’ll also be able to subscribe to the Urban Warfare Project Podcast very soon, so be sure to follow MWI on Twitter, Facebook, and LinkedIn so you can be among the first to subscribe to this exciting new podcast!

Image credit: Staff Sgt. Alex Manne, US Army

This post was originally published on this site
About Us

DuckDuckGo is an Internet privacy company that empowers you to seamlessly take control of your personal information online, without any tradeoffs. With our roots as the search engine that doesn’t track you, we’ve expanded what we do to protect you no matter where the Internet takes you.

Learn more about our story

This post was originally published on this site

Oftentimes, developers write bash scripts the same way they write code in a higher-level language. This is a big mistake, as higher-level languages offer safeguards that bash scripts lack by default. For example, a Ruby script will throw an error when trying to read from an uninitialized variable, whereas a bash script won’t. In this article, we’ll look at how we can improve on this.
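The difference is easy to demonstrate. Here is a minimal sketch (the variable name is made up for illustration) of how silently bash handles an unset variable by default:

```shell
#!/bin/bash

# By default, bash expands an unset variable to the empty string:
# the typo below ('greetng' instead of 'greeting') goes completely unnoticed.
greeting="Hello"
echo "$greetng, world"

# output
# ------
# , world
#
# With 'set -u' (one of the options covered below), the same typo would
# instead abort the script with an "unbound variable" error.
```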

The bash shell comes with several builtin commands for modifying the behavior of the shell itself. We are particularly interested in the set builtin, as this command has several options that will help us write safer scripts. I hope to convince you that it’s a really good idea to add set -euxo pipefail to the beginning of all your future bash scripts.
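To make the recommendation concrete, here is what such a preamble looks like in practice (a minimal sketch; the main function is just illustrative structure):

```shell
#!/bin/bash
# Fail fast and loud: exit on errors (-e), error on unset
# variables (-u), print each command before running it (-x),
# and propagate failures through pipelines (-o pipefail).
set -euxo pipefail

main() {
  echo "doing work"
}

main

# output (the '+' trace lines produced by -x go to stderr)
# ------
# doing work
```

Each of these options is explained in detail in the sections below.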

set -e

The -e option will cause a bash script to exit immediately when a command fails. This is generally a vast improvement upon the default behavior, where the script just ignores the failing command and continues with the next line. This option is also smart enough to not react to failing commands that are part of conditional statements. Moreover, you can append || true to a command for those rare cases where you don’t want a failing command to trigger an immediate exit.

Before

#!/bin/bash

# 'foo' is a non-existing command
foo
echo "bar"

# output
# ------
# line 4: foo: command not found
# bar
#
# Note how the script didn't exit when the foo command could not be found.
# Instead it continued on and echoed 'bar'.

After

#!/bin/bash
set -e

# 'foo' is a non-existing command
foo
echo "bar"

# output
# ------
# line 5: foo: command not found
#
# This time around the script exited immediately when the foo command wasn't found.
# Such behavior is much more in line with that of higher-level languages.

Any command returning a non-zero exit code will cause an immediate exit

#!/bin/bash
set -e

# 'ls' is an existing command, but giving it a nonsensical param will cause
# it to exit with exit code 1
$(ls foobar)
echo "bar"

# output
# ------
# ls: foobar: No such file or directory
#
# I'm putting this in here to illustrate that it's not just non-existing commands
# that will cause an immediate exit.

Preventing an immediate exit

#!/bin/bash
set -e

foo || true
$(ls foobar) || true
echo "bar"

# output
# ------
# line 4: foo: command not found
# ls: foobar: No such file or directory
# bar
#
# Sometimes we want to ensure that, even when 'set -e' is used, the failure of
# a particular command does not cause an immediate exit. We can use '|| true' for this.

Failing commands in a conditional statement will not cause an immediate exit

#!/bin/bash
set -e

# we make 'ls' exit with exit code 1 by giving it a nonsensical param
if ls foobar; then
  echo "foo"
else
  echo "bar"
fi

# output
# ------
# ls: foobar: No such file or directory
# bar
#
# Note that 'ls foobar' did not cause an immediate exit despite exiting with
# exit code 1. This is because the command was evaluated as part of a
# conditional statement.

That’s all for set -e. However, set -e by itself is far from enough. We can further improve upon the behavior created by set -e by combining it with set -o pipefail. Let’s have a look at that next.

set -o pipefail

The bash shell normally only looks at the exit code of the last command of a pipeline. This behavior is not ideal, as it means the -e option can only act on the exit code of a pipeline’s last command. This is where -o pipefail comes in. This option sets the exit code of a pipeline to that of the rightmost command to exit with a non-zero status, or to zero if all commands of the pipeline exit successfully.

Before

#!/bin/bash
set -e

# 'foo' is a non-existing command
foo | echo "a"
echo "bar"

# output
# ------
# a
# line 5: foo: command not found
# bar
#
# Note how the non-existing foo command does not cause an immediate exit, as
# its non-zero exit code is masked by the pipe into 'echo "a"'.

After

#!/bin/bash
set -eo pipefail

# 'foo' is a non-existing command
foo | echo "a"
echo "bar"

# output
# ------
# a
# line 5: foo: command not found
#
# This time around the non-existing foo command causes an immediate exit, as
# '-o pipefail' will prevent piping from causing non-zero exit codes to be ignored.
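The effect of -o pipefail is also visible when inspecting a pipeline's exit status directly (a small standalone sketch, run without -e so the script survives the failing pipeline):

```shell
#!/bin/bash

# Without pipefail, the pipeline's status is that of its last
# command, so the failure of 'false' is invisible.
false | true
echo "without pipefail: $?"

# With pipefail, the status is that of the rightmost command
# that failed, so the same pipeline now reports failure.
set -o pipefail
false | true
echo "with pipefail: $?"
set +o pipefail

# output
# ------
# without pipefail: 0
# with pipefail: 1
```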

This section hopefully made it clear that -o pipefail provides an important improvement upon just using -e by itself. However, as we shall see in the next section, we can still do more to make our scripts behave like higher-level languages.

set -u

This option causes the bash shell to treat unset variables as an error and exit immediately. Unset variables are a common cause of bugs in shell scripts, so having unset variables cause an immediate exit is often highly desirable behavior.

Before

#!/bin/bash
set -eo pipefail

echo $a
echo "bar"

# output
# ------
#
# bar
#
# The default behavior will not cause unset variables to trigger an immediate exit.
# In this particular example, echoing the non-existing $a variable will just cause
# an empty line to be printed.

After

#!/bin/bash
set -euo pipefail

echo "$a"
echo "bar"

# output
# ------
# line 5: a: unbound variable
#
# Notice how 'bar' no longer gets printed. We can clearly see that '-u' did indeed
# cause an immediate exit upon encountering an unset variable.

Dealing with ${a:-b} default values

Sometimes you’ll want to use the ${a:-b} expansion to substitute a default value of b when the variable a is either empty or unset. The -u option is smart enough not to cause an immediate exit in such a scenario.

#!/bin/bash
set -euo pipefail

DEFAULT=5
RESULT=${VAR:-$DEFAULT}
echo "$RESULT"

# output
# ------
# 5
#
# Even though VAR was not defined, the '-u' option realizes there's no need to cause
# an immediate exit in this scenario as a default value has been provided.

Using conditional statements that check if variables are set

Sometimes you want your script to not immediately exit when an unset variable is encountered. A common example is checking for a variable’s existence inside an if statement.

#!/bin/bash
set -euo pipefail

if [ -z "${MY_VAR:-}" ]; then
  echo "MY_VAR was not set"
fi

# output
# ------
# MY_VAR was not set
#
# In this scenario we don't want our program to exit when the unset MY_VAR variable
# is evaluated. We can prevent such an exit by using the same syntax as we did in the
# previous example, but this time around we specify no default value.
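A related expansion worth knowing about is ${a:?message}, which goes in the opposite direction: instead of tolerating an unset variable, it aborts with a custom error message, which is often clearer than the default 'unbound variable' complaint. A small sketch (the variable name is illustrative):

```shell
#!/bin/bash
set -euo pipefail

# When the variable is set, ${NAME:?msg} simply expands to its value.
NAME="world"
echo "Hello, ${NAME:?NAME must be set}"

# When it is unset, the expansion aborts with 'msg' on stderr. We
# run it in a subshell here so this demo script itself keeps going.
if ! (unset NAME; : "${NAME:?NAME must be set}") 2>/dev/null; then
  echo "subshell aborted as expected"
fi

# output
# ------
# Hello, world
# subshell aborted as expected
```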

This section has brought us a lot closer to making our bash shell behave like higher-level languages. While -euo pipefail is great for the early detection of all kinds of problems, sometimes it won’t be enough. This is why in the next section we’ll look at an option that will help us figure out those really tricky bugs that you encounter every once in a while.

set -x

The -x option causes bash to print each command before executing it. This can be a great help when trying to debug a bash script failure. Note that arguments get expanded before a command gets printed, which will cause our logs to contain the actual argument values that were present at the time of execution!

#!/bin/bash
set -euxo pipefail

a=5
echo $a
echo "bar"

# output
# ------
# + a=5
# + echo 5
# 5
# + echo bar
# bar
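One common tweak worth mentioning: the prefix of each traced line is controlled by the PS4 variable, so you can include the script name and line number in the trace (a small sketch):

```shell
#!/bin/bash
set -euxo pipefail

# PS4 controls the prefix of each line printed by -x; including
# the source file and line number makes long traces easier to read.
export PS4='+ ${BASH_SOURCE:-}:${LINENO}: '

a=5
echo "$a"
```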

That’s it for the -x option. It’s pretty straightforward, but it can be a great help when debugging. Next up, we’ll look at an option suggested by a reader of this blog that I had never heard of before.

Reader suggestion: set -E

Traps are pieces of code that fire when a bash script catches certain signals. Aside from the usual signals (e.g. SIGINT, SIGTERM, …), traps can also be used to catch special bash events like EXIT, DEBUG, RETURN, and ERR. However, reader Kevin Gibbs pointed out that using -e without -E will cause an ERR trap to not fire in certain scenarios.

Before

#!/bin/bash
set -euo pipefail

trap "echo ERR trap fired!" ERR

myfunc()
{
  # 'foo' is a non-existing command
  foo
}

myfunc
echo "bar"

# output
# ------
# line 9: foo: command not found
#
# Notice that while '-e' did indeed cause an immediate exit upon trying to execute
# the non-existing foo command, it did not cause the ERR trap to be fired.

After

#!/bin/bash
set -Eeuo pipefail

trap "echo ERR trap fired!" ERR

myfunc()
{
  # 'foo' is a non-existing command
  foo
}

myfunc
echo "bar"

# output
# ------
# line 9: foo: command not found
# ERR trap fired!
#
# Not only do we still have an immediate exit, we can also clearly see that the
# ERR trap was actually fired now.

The documentation states that -E needs to be set if we want the ERR trap to be inherited by shell functions, command substitutions, and commands that are executed in a subshell environment. The ERR trap is normally not inherited in such cases.
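Traps also pair naturally with these options for cleanup work: an EXIT trap fires both on normal termination and when set -e aborts the script, which makes it a reliable place to remove temporary files (a minimal sketch; the temp-file usage is illustrative):

```shell
#!/bin/bash
set -Eeuo pipefail

tmpfile=$(mktemp)

# An EXIT trap runs on normal exits and on set -e aborts alike,
# so the temporary file is cleaned up on every code path.
trap 'rm -f "$tmpfile"' EXIT

echo "some data" > "$tmpfile"
cat "$tmpfile"

# output
# ------
# some data
```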

Conclusion

I hope this post showed you why using set -euxo pipefail (or set -Eeuxo pipefail) is such a good idea. If you have any other options you want to suggest, then please let me know and I’ll be happy to add them to this list.