
Oftentimes, developers go about writing bash scripts the same way they write code in a higher-level language. This is a big mistake, as higher-level languages offer safeguards that are not present in bash scripts by default. For example, a Ruby script will throw an error when trying to read from an uninitialized variable, whereas a bash script won’t. In this article, we’ll look at how we can improve on this.

The bash shell comes with several builtin commands for modifying the behavior of the shell itself. We are particularly interested in the set builtin, as this command has several options that will help us write safer scripts. I hope to convince you that it’s a really good idea to add set -euxo pipefail to the beginning of all your future bash scripts.

set -e

The -e option will cause a bash script to exit immediately when a command fails. This is generally a vast improvement upon the default behavior, where the script just ignores the failing command and continues with the next line. This option is also smart enough not to react to failing commands that are part of conditional statements. Moreover, you can append || true to a command for those rare cases where you don’t want a failing command to trigger an immediate exit.

Before

#!/bin/bash

# 'foo' is a non-existing command
foo
echo "bar"

# output
# ------
# line 4: foo: command not found
# bar
#
# Note how the script didn't exit when the foo command could not be found.
# Instead it continued on and echoed 'bar'.

After

#!/bin/bash
set -e

# 'foo' is a non-existing command
foo
echo "bar"

# output
# ------
# line 5: foo: command not found
#
# This time around the script exited immediately when the foo command wasn't found.
# Such behavior is much more in line with that of higher-level languages.

Any command returning a non-zero exit code will cause an immediate exit

#!/bin/bash
set -e

# 'ls' is an existing command, but giving it a nonsensical param will cause
# it to exit with exit code 1
$(ls foobar)
echo "bar"

# output
# ------
# ls: foobar: No such file or directory
#
# I'm putting this in here to illustrate that it's not just non-existing commands
# that will cause an immediate exit.

Preventing an immediate exit

#!/bin/bash
set -e

foo || true
$(ls foobar) || true
echo "bar"

# output
# ------
# line 4: foo: command not found
# ls: foobar: No such file or directory
# bar
#
# Sometimes we want to ensure that, even when 'set -e' is used, the failure of
# a particular command does not cause an immediate exit. We can use '|| true' for this.

Failing commands in a conditional statement will not cause an immediate exit

#!/bin/bash
set -e

# we make 'ls' exit with exit code 1 by giving it a nonsensical param
if ls foobar; then
  echo "foo"
else
  echo "bar"
fi

# output
# ------
# ls: foobar: No such file or directory
# bar
#
# Note that 'ls foobar' did not cause an immediate exit despite exiting with
# exit code 1. This is because the command was evaluated as part of a
# conditional statement.

That’s all for set -e. However, set -e by itself is far from enough. We can further improve upon the behavior created by set -e by combining it with set -o pipefail. Let’s have a look at that next.

set -o pipefail

The bash shell normally only looks at the exit code of the last command of a pipeline. This behavior is not ideal as it causes the -e option to only be able to act on the exit code of a pipeline’s last command. This is where -o pipefail comes in. This particular option sets the exit code of a pipeline to that of the rightmost command to exit with a non-zero status, or to zero if all commands of the pipeline exit successfully.

Before

#!/bin/bash
set -e

# 'foo' is a non-existing command
foo | echo "a"
echo "bar"

# output
# ------
# a
# line 5: foo: command not found
# bar
#
# Note how the non-existing foo command does not cause an immediate exit, as
# its non-zero exit code is ignored by piping it with '| echo "a"'.

After

#!/bin/bash
set -eo pipefail

# 'foo' is a non-existing command
foo | echo "a"
echo "bar"

# output
# ------
# a
# line 5: foo: command not found
#
# This time around the non-existing foo command causes an immediate exit, as
# '-o pipefail' will prevent piping from causing non-zero exit codes to be ignored.

This section hopefully made it clear that -o pipefail provides an important improvement upon just using -e by itself. However, as we shall see in the next section, we can still do more to make our scripts behave like higher-level languages.

set -u

This option causes the bash shell to treat unset variables as an error and exit immediately. Unset variables are a common cause of bugs in shell scripts, so having unset variables cause an immediate exit is often highly desirable behavior.

Before

#!/bin/bash
set -eo pipefail

echo $a
echo "bar"

# output
# ------
#
# bar
#
# The default behavior will not cause unset variables to trigger an immediate exit.
# In this particular example, echoing the non-existing $a variable will just cause
# an empty line to be printed.

After

#!/bin/bash
set -euo pipefail

echo "$a"
echo "bar"

# output
# ------
# line 5: a: unbound variable
#
# Notice how 'bar' no longer gets printed. We can clearly see that '-u' did indeed
# cause an immediate exit upon encountering an unset variable.

Dealing with ${a:-b} variable assignments

Sometimes you’ll want to use a ${a:-b} variable assignment to ensure a variable is assigned a default value of b when a is either empty or undefined. The -u option is smart enough to not cause an immediate exit in such a scenario.

#!/bin/bash
set -euo pipefail

DEFAULT=5
RESULT=${VAR:-$DEFAULT}
echo "$RESULT"

# output
# ------
# 5
#
# Even though VAR was not defined, the '-u' option realizes there's no need to cause
# an immediate exit in this scenario as a default value has been provided.

Using conditional statements that check if variables are set

Sometimes you want your script to not immediately exit when an unset variable is encountered. A common example is checking for a variable’s existence inside an if statement.

#!/bin/bash
set -euo pipefail

if [ -z "${MY_VAR:-}" ]; then
  echo "MY_VAR was not set"
fi

# output
# ------
# MY_VAR was not set
#
# In this scenario we don't want our program to exit when the unset MY_VAR variable
# is evaluated. We can prevent such an exit by using the same syntax as we did in the
# previous example, but this time around we specify no default value.

This section has brought us a lot closer to making our bash shell behave like higher-level languages. While -euo pipefail is great for the early detection of all kinds of problems, sometimes it won’t be enough. This is why in the next section we’ll look at an option that will help us figure out those really tricky bugs that you encounter every once in a while.

set -x

The -x option causes bash to print each command before executing it. This can be a great help when trying to debug a bash script failure. Note that arguments get expanded before a command gets printed, which will cause our logs to contain the actual argument values that were present at the time of execution!

#!/bin/bash
set -euxo pipefail

a=5
echo $a
echo "bar"

# output
# ------
# + a=5
# + echo 5
# 5
# + echo bar
# bar

That’s it for the -x option. It’s pretty straightforward, but can be a great help for debugging. Next up, we’ll look at an option I had never heard of before that was suggested by a reader of this blog.

Reader suggestion: set -E

Traps are pieces of code that fire when a bash script catches certain signals. Aside from the usual signals (e.g. SIGINT, SIGTERM, …), traps can also be used to catch special bash signals like EXIT, DEBUG, RETURN, and ERR. However, reader Kevin Gibbs pointed out that using -e without -E will cause an ERR trap to not fire in certain scenarios.

Before

#!/bin/bash
set -euo pipefail

trap "echo ERR trap fired!" ERR

myfunc()
{
  # 'foo' is a non-existing command
  foo
}

myfunc
echo "bar"

# output
# ------
# line 9: foo: command not found
#
# Notice that while '-e' did indeed cause an immediate exit upon trying to execute
# the non-existing foo command, it did not cause the ERR trap to be fired.

After

#!/bin/bash
set -Eeuo pipefail

trap "echo ERR trap fired!" ERR

myfunc()
{
  # 'foo' is a non-existing command
  foo
}

myfunc
echo "bar"

# output
# ------
# line 9: foo: command not found
# ERR trap fired!
#
# Not only do we still have an immediate exit, we can also clearly see that the
# ERR trap was actually fired now.

The documentation states that -E needs to be set if we want the ERR trap to be inherited by shell functions, command substitutions, and commands that are executed in a subshell environment. The ERR trap is normally not inherited in such cases.

Conclusion

I hope this post showed you why using set -euxo pipefail (or set -Eeuxo pipefail) is such a good idea. If you have any other options you want to suggest, then please let me know and I’ll be happy to add them to this list.


Scripting Languages


Scripting languages are programming languages mostly (but not necessarily exclusively) used for scripting and that don’t require an explicit compilation step. A scripting language usually sits behind some other programming language; these languages are designed for integrating and communicating with other programming languages. They usually have less access to the system’s native abilities, since they run on a subset of the original programming language. For example, JavaScript will not be able to access your file system.

One common difference between a scripting language and a programming language is that, while a programming language is typically compiled first before being allowed to run, a scripting language is interpreted from source code or bytecode one command at a time. Generally, compiled languages run faster than scripting languages because they are first converted to native machine code. Also, a compiler reads and analyzes the code only once and reports any errors collectively, whereas an interpreter reads and analyzes the code statements each time it meets them and halts at that very instance if there is an error. Although scripting languages may have less access and are slower, they can be very powerful tools. One factor contributing to a scripting language’s success is the ease of updating.

Scripting Language examples

  1. JavaScript
  2. Perl
  3. VBScript and VBA

Programming Language examples

  1. C
  2. C++
  3. Java

jQuery

jQuery is a JavaScript library designed to simplify the client-side scripting of Hypertext Markup Language (HTML). It is a fast and concise library that simplifies HTML document traversal, event handling, animation, and Ajax interactions for rapid website development. Moreover, it provides capabilities for developers to create plug-ins on top of the library.

Learn jQuery

jQuery was originally created in January 2006 at BarCamp NYC by John Resig, influenced by Dean Edwards’ earlier cssQuery library. In 2015, jQuery was used on 63% of the top 1 million websites, and 17% of all Internet websites. As of June 2018, jQuery is used on 73% of the top 1 million websites, and by 22.4% of all websites.

  • jQuery Introduction
  • Introduction to jQuery Basics
  • jQuery Events
  • jQuery Effect Methods
  • jQuery DOM Manipulation
  • jQuery UI

A humorous and intentional example of etaoin shrdlu in a 1916 publication of The Day Book.

Etaoin shrdlu[1][2] is a nonsense phrase that sometimes appeared in print in the days of “hot type” publishing because of a custom of type-casting machine operators. It appeared often enough to become part of newspaper lore, and “etaoin shrdlu” is listed in the Oxford English Dictionary and in the Random House Webster’s Unabridged Dictionary.

It is the approximate order of frequency of the 12 most commonly used letters in the English language.[3]

History

The letters on type-casting machine keyboards (such as Linotype and Intertype) were arranged by letter frequency, so e-t-a-o-i-n s-h-r-d-l-u were the lowercase keys in the first two vertical columns on the left side of the keyboard. When operators made a mistake in composing, they would often finish the line by running a finger down the first two columns of the keyboard and then start over. Occasionally the faulty line of hot-metal type would be overlooked and printed erroneously.

A documentary about the last issue of The New York Times to be composed using the hot-metal printing process (2 July 1978) was titled Farewell, Etaoin Shrdlu.[4]

Appearance outside typography

A Linotype machine keyboard. It has the following alphabet arrangement twice, once for lower case (the black keys) and once for upper case (the white keys), with the keys in the middle for numbers and symbols: etaoin / shrdlu / cmfwyp / vbgkqj / xz

Close-up of keyboard, showing “etaoin / shrdlu” pattern.

The phrase has gained enough notability to appear outside typography, including:

Computing

  • SHRDLU was used in 1972 by Terry Winograd as the name for an early artificial-intelligence system in Lisp.[5]
  • The ETAOIN SHRDLU Chess Program was written by Garth Courtois Jr. for the Nova 1200 mini-computer, competing in the 6th and 7th ACM North American Computer Chess Championship 1975 and 1976.[6]
  • “Etienne Shrdlu” was used as the name of a character in Mavis Beacon Teaches Typing, touch-typing training software from the late 1980s.[7]

Literature

  • Elmer Rice‘s 1923 play The Adding Machine includes Shrdlu as a character.[8]
  • In 1942 Etaoin Shrdlu was the title of a short story by Fredric Brown about a sentient Linotype machine. (A sequel, Son of Etaoin Shrdlu: More Adventures in Typer and Space, was written by others in 1981.)[8]
  • Anthony Armstrong’s 1945 whimsical short story “Etaoin and Shrdlu” ends “And Sir Etaoin and Shrdlu married and lived so happily ever after that whenever you come across Etaoin’s name even today it’s generally followed by Shrdlu’s”.[8]
  • It is the name of a science fiction fanzine edited by Sheldon Lee Glashow & Steven Weinberg.[9]
  • Mr. Etaoin is a character – the Abalone newspaper typesetter – in The Circus of Dr. Lao.[10]
  • “Mr. Shrdlu – Etaoin Shrdlu” is Houn’ Dog’s response to Pogo’s question, “What you say his name is, Houn’ Dog?” referencing the author of Webster’s Dictionary in the daily strip for March 11, 1950, by Walt Kelly (reproduced on page 51 of the first paperback collection of Pogo cartoons, Pogo).
  • Thomas Pynchon named a character “Etienne Cherdlu” in his early short story The Secret Integration (1962) (see Slow Learner (1984)).
  • Three pieces in The New Yorker magazine were published in 1925, under the pen name Etain Shrdlu.[11]
  • At least one piece in The New Yorker magazine has Etaoin Shrdlu in the title.[12]
  • Edwin Morgan‘s poem “A View of Things”, published in his collection The Second Life (1968), contains the line “what I love about newspapers is their etaoin shrdl”.[13]
  • H. Beam Piper used “etaoin shrdlu” as part of someone swearing in his book Four Day Planet.[14]
  • Max Shulman used the term as a name of several once-referenced characters in the 1944 book Barefoot Boy with Cheek.
  • Douglas R. Hofstadter‘s Gödel, Escher, Bach: An Eternal Golden Braid includes a chapter named “SHRDLU, Toy of Man’s Designing,” which features a character named “Eta Oin” using a computer program “SHRDLU” — a reference to Terry Winograd‘s program and Bach’s Jesu, Joy of Man’s Desiring.
  • In The Black Hole Travel Agency novels by Jack McKinney, Etaoin Shrdlu is the pseudonym for a number of authors who have written for the long-running Worlds Abound space opera novel series.

Media

  • In 1958, the National Press Club (USA) published Shrdlu – An Affectionate Chronicle, a 50-year retrospective of the Club’s history.[15]
  • Etaoin Shrdlu is the name of a character in at least two Robert Crumb comic stories. The Complete Crumb Comics Vol. 14
  • Etaoin and Shrdlu both appear frequently in the drawings of Emile Mercier, as place names, racehorses’ names and people’s names.
  • “Farewell, Etaoin Shrdlu”, filmed on July 1, 1978, is a documentary by David Loeb Weiss that chronicles the end of “hot type” and the introduction of computers into The New York Times’s printing process.

Music

  • Shrdlu (Norman Shrdlu) is listed as the composer of “Jam Blues”, cut 1 on the 1951 Norman Granz-produced jazz album released in 1990 as Charlie Parker Jam Session. This appears to be a joke on Parker’s part as Norman Shrdlu is credited in several Parker (and other) tunes.
  • “Etaoin Shrdlu” is the title of the first song on Cul de Sac‘s 1999 album Crashes to Light, Minutes to Its Fall.
  • “Etaoin”[16] and “Shrdlu”, written and performed by Dallas Roberts, are original musical pieces created for the soundtrack of the US television series House of Cards, Season 2, Episode 10.[17]


References

  1. ^ “etaoin shrdlu”. Merriam-Webster. Encyclopædia Britannica.
  2. ^ Weiss, David Loeb. “Farewell, Etaoin Shrdlu”. New York Times. New York Times. Retrieved 3 January 2017.
  3. ^ Stoddard, Samuel (2004). “Letter Frequency”. Fun With Words. RinkWorks. Retrieved 28 June 2013.
  4. ^ Farewell, Etaoin Shrdlu (Motion picture). New York City: Educational Media Collection/University of Washington. Retrieved 28 June 2013.
  5. ^ Winograd, Terry. “How SHRDLU got its name”. SHRDLU. Stanford University. Retrieved 28 June 2013.
  6. ^ Courtois Jr., Garth (7 August 2008). “Am I old enough to remember keypunch cards? Umm, yeah…” Blog Archives. ababsurdo.com. Retrieved 27 June 2013.
  7. ^ Weasel, Yah. “Let’s Play Mavis Beacon Teaches Typing”. YouTube. Retrieved 26 August 2015.
  8. ^ a b c Quinion, Michael. “etaoin shrdlu”. Weird Words. World Wide Words. Retrieved 28 June 2013.
  9. ^ Carter Scholz; Gregory Benford; Hugh Gusterson; Sam Cohen; Curtis LeMay. “Radiance”. Retrieved 24 April 2016.
  10. ^ Charles G. Finney (1935), The Circus of Dr. Lao, Viking Press, ISBN 4-87187-664-0
  11. ^ “Etain Shrdlu – The New Yorker”. The New Yorker. 28 March 1925. Retrieved 24 April 2016.
  12. ^ Charles Cooke (31 October 1936). “It Can’t Etaoin Shrdlu”. The New Yorker. Retrieved 24 April 2016.
  13. ^ “A View of Things”. The Edwin Morgan Archive at the Scottish Poetry Library. Retrieved 19 May 2016.
  14. ^ Four-Day Planet by H. Beam Piper.
  15. ^ Shrdlu – An Affectionate Chronicle. Washington, DC: National Press Club. 1958.
  16. ^ “Etaoin (Song) by Dallas Roberts”. Popisms. Retrieved May 25, 2017.
  17. ^ “Songs and music featured in House of Cards S2 E10 Chapter 23”. Tunefind. Retrieved May 25, 2017.


(Submitted on 25 Oct 2018 (v1), last revised 8 Nov 2018 (this version, v2))

Abstract: Based on 46 in-depth interviews with scientists, engineers, and CEOs, this document presents a list of concrete machine learning research problems, progress on which would directly benefit tech ventures in East Africa.

Submission history

From: Milan Cvitkovic [view email]
[v1] Thu, 25 Oct 2018 02:53:14 UTC (11 KB)
[v2] Thu, 8 Nov 2018 01:03:50 UTC (11 KB)


This post is brought to you by the Dutch police.

Imagine this: You’re leaving work, walking to your car, and you find an empty parking spot — someone stole your brand new Tesla (or whatever fancy autonomous car you’re driving). When you call the police, they ask your permission for a “takeover,” which you promptly give them. Next thing you know, your car is driving itself to the nearest police station. And here’s the kicker — if the thief is inside, they will remain locked in until the police can arrest them.

This futuristic and almost slapstick scenario is closer than we think, says Chief Innovation Officer Hans Schönfeld, who works for the Dutch police. His team has already run several experiments to test the crime-halting possibilities of autonomous cars.

“We wanted to know if we can make them stop or drive them to certain locations,” Schönfeld tells me. “And the result is: yes, we probably can.”

“The police tested several cars; Tesla, Audi, Mercedes, and Toyota,” he continues. “We do this in collaboration with these car companies because this information is valuable to them, too. If we can hack into their cars, others can as well.”

Kill switch

Other car makers have already built similar features into their vehicles, but without the driverless aspect. GM equipped 17,000 of its 2009 units with “remote ignition block,” a kill switch that can turn off the engine in case the car is reported stolen.

Before you start referencing dystopian cop movies in your head (“that’s exactly like Upgrade!”), rest assured: we’re still years away from cars driving themselves into custody, not least because most citizens currently can’t legally drive autonomous cars on public roads.

But it’s not just self-driving cars that are changing how police work. Mobility as a whole is rapidly developing and law enforcement organizations need to keep up.

The first phase, which we’ve been in for some years now, brought intelligent cars — cars with chips that collect data about speed, braking power, and more. Whenever car accidents happen, police can read these chips to better pinpoint the circumstances.

“It helps us differentiate between killing someone by accident — someone speeding just a little — and manslaughter — someone driving way too fast while hitting the victim,” explains Schönfeld.

Hello, ambulance? It’s me, car

While Schönfeld expects it will take up to 10 years before self-driving cars are available in the Netherlands, connected cars — or phase two — will go mainstream sooner.

Connected cars have internet access and are often also connected to local wireless networks. This allows them to connect to other devices, both inside and outside, and exchange data.

With public IoT (Internet of Things) becoming increasingly common in the Netherlands, these cars will soon communicate with other smart machines around them, like traffic lights or street lights, and even with each other.

A few months ago, Dutch researchers tested a fleet of seven connected cars, all equipped with cooperative adaptive cruise control (CACC), on a cleared stretch of highway. The cars could adapt their speed to each other and talk to intelligent traffic lights on the road.

“The expected advantage of cruise control is that roads can be used more efficiently,” said Elisabeth Post, who worked on the project. “It allows for more cars on the road simultaneously, as well as more cars utilizing the same green light.”

Schönfeld envisions a near future where cars will know everything about their surroundings, as well as you, the driver. This constant data collection could save your life someday, he adds.

“Let’s say you’ve been in an accident. It’s night, it’s dark, and you’re lying in a ditch somewhere. Your car will know there’s been an accident because it monitors g-forces. It will be able to call an ambulance, communicate where the accident happened, what the car looks like, and even who was driving by measuring the driver’s weight.”

Self-driving bombs

It’s not all butterflies and rainbows, though. Yes, self-driving cars will probably increase road safety and benefit the environment, but criminals will be driving them too. Imagine a driverless getaway car after a bank robbery. Now all passengers have their hands free to shoot pursuers.

Terrorism also comes to mind. Self-driving cars become driving bombs when they’re loaded with explosives; suicide bombers won’t be needed to plan an attack.

“Once we’re all driving autonomous cars, I imagine we need systems that detect cars without passengers inside, specifically in crowded, public spaces,” says Schönfeld.

Cop car of the future

Police cars will be getting an update, too. In 2016, car-maker Ford filed a patent for driverless cop cars that can find their own hiding spots, catch lawbreakers and even give out tickets.

Two years prior, Google patented “emergency vehicle detection” technology for its self-driving cars. When the system detects flashing lights, it moves itself out of the way so the police car — or ambulance, or fire truck — can safely pass. Though neither technology is in use yet, these tech companies are clearly anticipating a self-driving future.

“This is something we need to start thinking about,” says Schönfeld. “Should we start planning for connected and self-driving police cars? When do we need them?”

Once self-driving cars do become legal and mainstream, traffic violations will drastically decrease. “That means a large portion of the force, our traffic officers, will probably get new duties,” he adds.

After all, autonomous cars will comply with speed limits, respect red traffic lights, and never double-park. Does this mean we get to blame our driverless cars whenever we do get a ticket? “No, you will still be accountable,” he concludes. “Unless it’s a proven technical defect. Then your car manufacturer will have to pay up.”

Now that much crime has gone digital, the Dutch police need tech talent more than ever. Check out the various tech jobs they have to offer.


It’s a familiar feeling: Type something into Google’s search bar, and then start seeing ads for it everywhere. Sometimes you don’t even need to search—Google’s already triangulated your desires based on your emails, your demographics, your location. Now that familiarity stands to get a lot more intimate. With a fascinating pair of new patents for smart-home technology, Google is hoping users will open their home to its trademark eavesdropping.

In the first patent, Google imagines devices that would scan and analyze the surroundings of your home, then offer you content based on what they detect. According to the patent, the smart cameras in such a device could, for example, recognize Will Smith’s face on a T-shirt on the floor of a user’s closet. After matching this analysis against your browser history, the device might then say aloud, “You seem to like Will Smith. His new movie is playing in a theater near you.”

It doesn’t stop at Will Smith movies. The patent imagines that smart-home devices would make all types of inferences about users, sorting them into categories based on what the devices see in their most personal spaces. Using object recognition, they could calculate “fashion taste” by scanning your clothing, and even estimate your income based on any “expensive mechanical and/or electronic devices” they detect. Audio signatures, too, could be used to not only identify users, but to determine gender and age based on the timbre of their voice. The smart home would recommend what to watch and where to shop, all based on how it sorts users into categories of taste, income, and interest.

If this sounds invasive, it’s important to recognize that this is already happening, just online. Google and Facebook both record and analyze user behavior, use it to sort people into categories, and then target them with ads and other content. Facebook likely knows your race and religion, while Google uses your emails and search history to sort you into ad-ready brackets. Netflix infers all types of data on users based on what they watch, then serves back hyper-specific movie and TV categories. This patent simply expands the areas in which your behavior is already mined and recorded from your phone and laptop to your bedroom.


And your children’s bedrooms. The second patent proposes a smart-home system that would help run the household, using sensors and cameras to restrict kids’ behavior. Parents could program a device to note if it overhears “foul language” from children, scan internet usage for mature or objectionable content, or use “occupancy sensors” to determine if certain areas of the house are accessed while they’re gone — for example, the liquor cabinet. The system could be set to “change a smart lighting system color to red and flash the lights” as a warning to children or even power off lights and devices if they’re grounded.

While people can set goals for their children or themselves, these policies could also be “based upon certain inputs from remote vendors/facilitators/regulators/etc.,” according to the patent. That opens the door for companies to offer rewards for behaviors in the home. A household may set the internal goal of “Spend less time on electronic devices,” or “Use 5 percent less energy each month for the next three months.” Google devices could then connect to anything “smart” in the home and send you, and potentially a vendor or third party, updates on usage and screen time.

Just this month, the insurance company United Healthcare began partnering with employers to offer free Apple Watches to those who hit certain fitness goals. Insurers might also offer benefits to residents whose homes prove their fitness or brand loyalty—and punish those who don’t. Health insurers could use data from the kitchen as a proxy for eating habits, and adjust their rates accordingly. Landlords could use occupancy sensors to see who comes and goes, or watch for photo evidence of pets. Life-insurance companies could penalize smokers caught on camera. Online and in person, consumers are often asked to weigh privacy against convenience and personalization: A kickback on utilities or insurance payments may thumb the scales in Google’s favor.

For reward systems created by either users or companies to be possible, the devices would have to know what you’re doing at all times. The language of these patents makes it clear that Google is acutely aware of the powers of inference it has already, even without cameras, by augmenting speakers to recognize the noises you make as you move around the house. The auditory inferences are startling: Google’s smart-home system can infer “if a household member is working” from “an audio signature of keyboard clicking, a desk chair moving, and/or papers shuffling.” Google can make inferences on your mood based on whether it hears raised voices or crying, on when you’re in the kitchen based on the sound of the fridge door opening, on your dental hygiene based on “the sounds and/or images of teeth brushing.”

Of course, patents aren’t products, but they do represent an important shift. For a long time, the foundational metaphor of surveillance studies has been the panopticon—unending, inescapable, unwanted surveillance. Now these patents seem to hint that the age of hyper-personalization will make people willing, enthusiastic participants in the panopticon, both as subjects and as architects.


My wife and I both have a PM (ProtonMail) account. Today, I sent her a lengthy email which was quite complex (I’m a writer and she was proofreading for me).

She asked me why I was using so many English words and why my sentences were so terrible. I realised that this was not the mail I had sent. I checked my Sent mail folder; everything was fine. But on her computer, my mail appeared as if it had been translated from French to English and then back to French.

It was very strange, so I asked her to check the email on her phone using the PM iOS app. The mail was fine.

I then realised that she was using Chrome to check her email. After a bit of fiddling, I discovered that disabling the “suggest to automatically translate a website in a foreign language” option solved the issue.

But the conclusion is frightening: it means that the content of every webpage visited using Google Chrome is sent back to Google. That means every email, even in ProtonMail, is sent to Google, even though in this case the translation should not have happened (translation had been disabled for both French and English websites, so there was no reason to think PM would be translated).

Only solution: don’t use Chrome. Don’t use it at all.


The 2018 Engineering Gift Guide from Purdue University is filled with fun toys, games, books, and applications to engage girls and boys ages 3-18 in engineering thinking and design. Items included in the guide go through an extensive review process. Researchers looked for toys that would promote engineering practices ranging from coding and spatial reasoning to problem solving and critical thinking. 

Download a PDF version of the 2018 Engineering Gift Guide 

See previous Gift Guides


A cybersecurity group has released a new tool to help companies track whether they’ve implemented domain-based message authentication, reporting and conformance (DMARC), a security standard aimed at preventing fake or “spoof” emails from being sent from legitimate accounts.

The Global Cyber Alliance (GCA) unveiled the GCA DMARC Leaderboard on Monday, aimed at tracking where the DMARC policy is currently being utilized. The tool was launched during a symposium co-hosted by GCA and the Cybersecurity Tech Accord, according to a release.

The tool shows that the U.S. is a leader in DMARC implementation, with 1,235 domains having adopted the practice, while the United Kingdom has the largest number of participating domains, at 2,351.


Many major email services like Gmail and Microsoft already include the protocol for their accounts, but the government and private businesses are still behind in implementing the practice, according to the release.

“The GCA DMARC Leaderboard offers a way to compare countries, sectors and companies as to their progress in deploying email spoofing protections, and we hope it helps lead to universal adoption of DMARC,” Philip Reitinger, the president and CEO of GCA, said in a statement. “Using DMARC to prevent email domain spoofing is essential as an anti-phishing measure.”
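
For readers who want to check a domain themselves: a DMARC policy is published as a DNS TXT record at _dmarc.<domain>. The snippet below is a minimal sketch, assuming the third-party dnspython package is installed and using example.com as a placeholder domain.

# Minimal sketch: look up a domain's DMARC policy record.
# Assumes the third-party "dnspython" package (pip install dnspython);
# "example.com" is a placeholder domain.
import dns.resolver

def get_dmarc_record(domain):
    try:
        answers = dns.resolver.resolve("_dmarc." + domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        txt = b"".join(rdata.strings).decode("utf-8")
        if txt.lower().startswith("v=dmarc1"):
            return txt  # e.g. "v=DMARC1; p=reject; rua=mailto:reports@example.com"
    return None

print(get_dmarc_record("example.com") or "No DMARC record published")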

This new tool was released roughly a month after the Department of Homeland Security-mandated deadline for federal agencies to adopt DMARC for their domains, helping to block spoof emails sent from addresses that could be used in phishing campaigns.

Security firms released varying numbers at the time of the October deadline: Cybersecurity firm Proofpoint found that 26 percent of government domains failed to meet the deadline, while cyber firm Agari said 85 percent of federal domains had adopted DMARC.

We are excited to announce the launch of the Amazon Kinesis Video Streams Inference Template (KIT) for Amazon SageMaker. This capability enables customers to attach Kinesis Video streams to Amazon SageMaker endpoints in minutes, driving real-time inferences without having to use any other libraries or write custom software to integrate the services. The KIT comprises the Kinesis Video Client Library software, packaged as a Docker container, and an AWS CloudFormation template that automates the deployment of all required AWS resources. Amazon Kinesis Video Streams makes it easy to securely stream audio, video, and related metadata from connected devices to AWS for analytics, machine learning (ML), playback, and other processing. Amazon SageMaker is the managed platform for developers and data scientists to build, train, and deploy ML models quickly and easily.

Customers ingest audio and video feeds from sources like home security cameras, enterprise IP cameras, traffic cameras, AWS DeepLens, cellphones, and more into Kinesis Video Streams. Developers and data scientists across industry verticals ranging from smart homes to smart cities, from intelligent manufacturing to retail, want to deploy their own machine learning algorithms to analyze these video feeds on the AWS Cloud. These customers want a reliable way to connect Kinesis Video Streams to their Amazon SageMaker endpoints, so that they can build scalable, real-time, ML-driven video analytics pipelines with minimal operating overhead.

In this blog post, we’ll introduce this new capability and explain the functionality of both the Kinesis Video Streams Client Library and the CloudFormation template. We’ll also provide a step-by-step working example of integrating Kinesis Video Streams to Amazon SageMaker using KIT.

Kinesis Video Streams and Machine-Learning driven analytics

Amazon Kinesis Video Streams launched at re:Invent 2017. At launch it was already integrated with Amazon Rekognition Video, enabling an easy way to perform real-time face recognition using a private database of face metadata. This earlier blog post details how to use facial recognition to deliver a high-end consumer experience with Amazon Kinesis Video Streams and Amazon Rekognition Video.

As customers ingest a variety of video feeds using Kinesis Video Streams, their use cases, training data sets, and types of inferences are also diversifying. For example, a leading home security provider wants to ingest audio and video from its home security cameras using Kinesis Video Streams and then attach its own custom ML models running in Amazon SageMaker to detect and analyze pets and objects, building richer user experiences. An in-store physical retail intelligence provider wants to stream videos from cameras placed inside stores to train a custom person-counting model using Amazon SageMaker, enabling real-time inferences that estimate the number of shoppers in the store to inform store operations.

Kinesis Video Streams integration with Amazon SageMaker using KIT

We’ll now discuss the two components that constitute KIT for Amazon SageMaker.

The Kinesis Video Streams client library enables scalable, at-least-once processing of the media across a distributed set of workers, manages the reliable invocation of Amazon SageMaker endpoints, and publishes inference results into a Kinesis data stream for subsequent processing. Specifically, the library determines which Kinesis Video streams have to be processed, connects to the streams, and refreshes the set periodically to include or exclude streams for processing. The software instantiates a worker that runs consumers, each responsible for processing one Kinesis Video stream at any given time. As part of this, it also maintains leases for every consumer running in (and across) workers so that the workers can coordinate among themselves which streams each one processes. It also ensures reliable, at-least-once processing of the media fragments by managing checkpoints on a per-lease, per-stream basis.
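
To make the lease idea concrete, here is a purely illustrative Python sketch of how a worker might claim a stream lease with a DynamoDB conditional write. The table and attribute names are invented for the example; this is not the library’s actual schema or code.

# Illustrative only: one way a worker could claim a lease on a stream using a
# DynamoDB conditional write. Table and attribute names are invented; the KIT
# library's real schema and logic may differ.
import time
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def try_claim_lease(table, stream_name, worker_id, ttl_seconds=60):
    now = int(time.time())
    try:
        dynamodb.update_item(
            TableName=table,
            Key={"streamName": {"S": stream_name}},
            UpdateExpression="SET leaseOwner = :w, leaseExpiry = :e",
            # Succeed only if no lease exists or the previous lease has expired.
            ConditionExpression="attribute_not_exists(leaseOwner) OR leaseExpiry < :n",
            ExpressionAttributeValues={
                ":w": {"S": worker_id},
                ":e": {"N": str(now + ttl_seconds)},
                ":n": {"N": str(now)},
            },
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False  # another worker holds a valid lease
        raise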

The software pulls media fragments from the streams using the real-time Kinesis Video Streams GetMedia API operation, parses the fragments to extract the H.264 chunks, samples the frames that need decoding, then decodes the I-frames and converts them into an image format such as JPEG or PNG before invoking the Amazon SageMaker endpoint. As the Amazon SageMaker-hosted model returns inferences, KIT captures and publishes those results into a Kinesis data stream. Customers can then consume those results using their favorite service, such as AWS Lambda. Finally, the library publishes a variety of metrics into Amazon CloudWatch so that customers can build dashboards, monitor, and alarm on thresholds as they deploy into production.
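
To make the tail end of that pipeline concrete, here is a minimal Python sketch of the last two steps (invoking the endpoint with a decoded JPEG frame and publishing the inference), using standard boto3 clients. The endpoint name, stream name, and image/jpeg content type are placeholder assumptions; the KIT library handles all of this, plus the parsing and decoding, for you.

# Rough sketch: send a decoded JPEG frame to a SageMaker endpoint and forward
# the inference to a Kinesis data stream. Endpoint and stream names are
# placeholders; the KIT library performs these steps (and more) automatically.
import json
import boto3

sm_runtime = boto3.client("sagemaker-runtime")
kinesis = boto3.client("kinesis")

def infer_and_publish(jpeg_bytes,
                      endpoint_name="my-image-endpoint",      # placeholder
                      output_stream="my-inference-stream"):   # placeholder
    # Call the hosted model with the raw JPEG payload.
    response = sm_runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="image/jpeg",
        Body=jpeg_bytes,
    )
    inference = json.loads(response["Body"].read())

    # Publish the inference so downstream consumers (for example, AWS Lambda) can react.
    kinesis.put_record(
        StreamName=output_stream,
        Data=json.dumps({"sageMakerOutput": inference}),
        PartitionKey=endpoint_name,
    )
    return inference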

The AWS CloudFormation template automates the deployment of all relevant AWS infrastructure in the customer’s own account to read media from Kinesis Video Streams and invoke the Amazon SageMaker endpoint for ML-based analytics. This saves the time it would otherwise take to build, operate, and scale the integrated capability.

The CloudFormation template first creates an Amazon Elastic Container Service (ECS) cluster using the AWS Fargate compute engine, which runs the library software hosted in a Docker container.

It also spins up an Amazon DynamoDB table for maintaining checkpoints and related state across workers that run as Fargate tasks, and an Amazon Kinesis data stream to capture the inference outputs generated from Amazon SageMaker. The template also creates the requisite AWS Identity and Access Management (IAM) policies and Amazon CloudWatch resources to monitor the entire infrastructure. KIT for Amazon SageMaker is compatible with any Amazon SageMaker endpoint that accepts image data. Customers can modify the template as needed to fit their specific use case.

How to set up KIT

Prerequisites

Step-by-step instructions for KIT deployment

  • You’ll deploy the solution by means of a CloudFormation template.
  • CloudFormation is a powerful tool that facilitates the creation of infrastructure-as-code templates for repeatable infrastructure resource deployments.
    1. Log into your AWS account if you haven’t already, by means of the following URL, replacing the Xs with your account number: https://xxxxxxxxxxxx.signin.aws.amazon.com/console. If you have already logged in, go to step 2.
    2. On the AWS Services search bar choose CloudFormation.
    3. Select the CloudFormation Template for your target region from this location
    4. Name the Stack and fill out the parameters, then choose Next. (An equivalent programmatic deployment is sketched after these steps.)
      • AppName – A unique application name that is used for creating all resources
      • DockerImageRepository – Docker Image for Kinesis Video Streams and SageMaker Driver
      • EndPointAcceptContentType – image/jpeg or image/png image formats are currently supported for invoking the SageMaker endpoint
      • LambdaFunctionBucket – Amazon S3 bucket location for your custom Lambda function
      • LambdaFunctionKey – Amazon S3 Object Key  for your custom Lambda function code zip file
      • SageMaker Endpoint – Amazon SageMaker endpoint that hosts your custom Machine Learning model
      • StreamNames – CSV list of strings specifying stream names
      • TagFilters – JSON string of Tag filters
    5. Leave the parameters on the Options page as default and choose Next.
    6. Review the configuration information on the Review page, select the check box acknowledging the creation of IAM roles, and choose Create.
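
As mentioned in step 4, here is a hypothetical sketch of creating the same stack programmatically with boto3 instead of the console. The template URL is a placeholder, and the parameter keys (AppName, StreamNames, SageMakerEndpoint, EndPointAcceptContentType) are assumptions based on the console labels above; check the actual template for the exact keys it expects.

# Hypothetical sketch: deploy the KIT stack with boto3 instead of the console.
# The TemplateURL and the parameter keys are assumptions; verify them against
# the actual CloudFormation template before using this.
import boto3

cfn = boto3.client("cloudformation")

cfn.create_stack(
    StackName="kit-sagemaker-demo",
    TemplateURL="https://s3.amazonaws.com/your-bucket/kit-template.yaml",  # placeholder
    Parameters=[
        {"ParameterKey": "AppName", "ParameterValue": "kit-demo"},
        {"ParameterKey": "StreamNames", "ParameterValue": "my-video-stream"},
        {"ParameterKey": "SageMakerEndpoint", "ParameterValue": "my-image-endpoint"},
        {"ParameterKey": "EndPointAcceptContentType", "ParameterValue": "image/jpeg"},
    ],
    # The template creates IAM roles, so this acknowledgement is required.
    Capabilities=["CAPABILITY_IAM"],
)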

Extending the Solution

Depending on your use case, this solution can be extended by updating the Lambda function and integrating with other AWS services.

In this example, we’ll retrieve the Kinesis Video fragment and store it in an Amazon S3 bucket along with detection data.

  1. Create an Amazon S3 bucket.
  2. Add the following additional permissions to the AWS Lambda execution role, replacing the placeholders with the correct bucket name and Kinesis Video Stream ARNs. These additional permissions enable AWS Lambda to retrieve the fragment from the Kinesis Video stream and write to the S3 bucket.
    {
        "Effect": "Allow",
        "Action": [
            "s3:PutObject"
        ],
        "Resource": [
            "arn:aws:s3:::<<YOUR BUCKET>>/*"
        ]
    },
    {
        "Effect": "Allow",
        "Action": [
            "kinesisvideo:GetMediaForFragmentList",
            "kinesisvideo:GetDataEndpoint"
        ],
        "Resource": [
            "<< YOUR KINESIS VIDEO STREAM ARNs>>"
        ]
    }
    
    
  3. Replace <<YOUR BUCKET>> in the following code, then use it to replace the Lambda function code.
    from __future__ import print_function
    import base64
    import json
    import boto3
    import os
    import datetime
    import time
    from botocore.exceptions import ClientError
    
    bucket='<<YOUR BUCKET>>'
    
    #Lambda function is written based on output from an Amazon SageMaker example: 
    #https://github.com/awslabs/amazon-sagemaker-examples/blob/master/introduction_to_amazon_algorithms/object_detection_pascalvoc_coco/object_detection_image_json_format.ipynb
    object_categories = ['person', 'bicycle', 'car',  'motorbike', 'aeroplane', 'bus', 'train', 'truck', 'boat', 
                         'traffic light', 'fire hydrant', 'stop sign', 'parking meter', 'bench', 'bird', 'cat', 'dog',
                         'horse', 'sheep', 'cow', 'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag',
                         'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite', 'baseball bat',
                         'baseball glove', 'skateboard', 'surfboard', 'tennis racket', 'bottle', 'wine glass', 'cup',
                         'fork', 'knife', 'spoon', 'bowl', 'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot',
                         'hot dog', 'pizza', 'donut', 'cake', 'chair', 'sofa', 'pottedplant', 'bed', 'diningtable',
                         'toilet', 'tvmonitor', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone', 'microwave', 'oven',
                         'toaster', 'sink', 'refrigerator', 'book', 'clock', 'vase', 'scissors', 'teddy bear', 'hair drier',
                         'toothbrush']
    
    def lambda_handler(event, context):
      for record in event['Records']:
        payload = base64.b64decode(record['kinesis']['data'])
        #Get Json format of Kinesis Data Stream Output
        result = json.loads(payload)
        #Get FragmentMetaData
        fragment = result['fragmentMetaData']
        
        # Extract Fragment ID and Timestamp
        frag_id = fragment[17:-1].split(",")[0].split("=")[1]
        srv_ts = datetime.datetime.fromtimestamp(float(fragment[17:-1].split(",")[1].split("=")[1])/1000)
        srv_ts1 = srv_ts.strftime("%A, %d %B %Y %H:%M:%S")
        
        #Get FrameMetaData
        frame = result['frameMetaData']
        #Get StreamName
        streamName = result['streamName']
       
        #Get SageMaker response in Json format
        sageMakerOutput = json.loads(base64.b64decode(result['sageMakerOutput']))
        #Print the 5 detected objects with the highest probability
        for i in range(5):
          print("detected object: " + object_categories[int(sageMakerOutput['prediction'][i][0])] + ", with probability: " + str(sageMakerOutput['prediction'][i][1]))
        
        detections={}
        detections['StreamName']=streamName
        detections['fragmentMetaData']=fragment
        detections['frameMetaData']=frame
        detections['sageMakerOutput']=sageMakerOutput
    
        #Get KVS fragment and write .webm file and detection details to S3
        s3 = boto3.client('s3')
        kv = boto3.client('kinesisvideo')
        get_ep = kv.get_data_endpoint(StreamName=streamName, APIName='GET_MEDIA_FOR_FRAGMENT_LIST')
        kvam_ep = get_ep['DataEndpoint']
        kvam = boto3.client('kinesis-video-archived-media', endpoint_url=kvam_ep)
        getmedia = kvam.get_media_for_fragment_list(
                                StreamName=streamName,
                                Fragments=[frag_id])
        base_key=streamName+"_"+time.strftime("%Y%m%d-%H%M%S")
        webm_key=base_key+'.webm'
        text_key=base_key+'.txt'
        s3.put_object(Bucket=bucket, Key=webm_key, Body=getmedia['Payload'].read())
        s3.put_object(Bucket=bucket, Key=text_key, Body=json.dumps(detections))
        print("Detection details and fragment stored in the S3 bucket "+bucket+" with object names : "+webm_key+" & "+text_key)
      return 'Successfully processed {} records.'.format(len(event['Records']))
    
    

S3 Bucket with video fragments and detection details

The following screenshot shows that KIT for Amazon SageMaker is emitting detected video fragments and corresponding inferences into the Amazon S3 bucket.

AWS Lambda function logs showing processed output

This solution can be extended for various use cases. For example, by combining the OpenCV computer vision library with the Amazon SageMaker prediction details, bounding boxes can be added to the detected objects in the video frames and fed into a real-time alerting portal.
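
As a rough illustration of that idea, the sketch below draws boxes with OpenCV. It assumes detections shaped like the output of the SageMaker object-detection example referenced in the Lambda code above ([class_index, score, xmin, ymin, xmax, ymax], with coordinates normalized to the range 0 to 1) and requires the opencv-python package; adapt it to your model’s actual output format.

# Illustrative sketch: draw bounding boxes on a frame with OpenCV.
# Assumes detections of the form [class_index, score, xmin, ymin, xmax, ymax]
# with normalized coordinates, as in the SageMaker object-detection example
# referenced above. Requires the opencv-python package.
import cv2

def draw_detections(image_path, detections, labels, threshold=0.5,
                    out_path="annotated.jpg"):
    img = cv2.imread(image_path)
    height, width = img.shape[:2]
    for class_idx, score, xmin, ymin, xmax, ymax in detections:
        if score < threshold:
            continue  # skip low-confidence detections
        top_left = (int(xmin * width), int(ymin * height))
        bottom_right = (int(xmax * width), int(ymax * height))
        cv2.rectangle(img, top_left, bottom_right, (0, 255, 0), 2)
        label = "{} {:.2f}".format(labels[int(class_idx)], score)
        cv2.putText(img, label, (top_left[0], max(top_left[1] - 5, 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imwrite(out_path, img)

# Example: labels could be the object_categories list from the Lambda function above.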

Monitoring the KIT-managed infrastructure

The library software vends a variety of CloudWatch metrics by default that customers can use to monitor the progress being made in processing individual streams. These include metrics on the resource consumption of the workers in the cluster, the rate at which the Amazon SageMaker endpoint is being invoked, and how the inference results are published into the Kinesis data stream. The CloudFormation template creates a ready-to-use CloudWatch dashboard that customers can further extend for their purposes. By default the dashboard captures the key metrics for the underlying services that power KIT, as well as custom metrics specific to the latency, reliability, and scaling characteristics of the software.

CloudWatch dashboard – KIT metrics
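
If you prefer to work with these metrics programmatically, the following sketch shows how the standard boto3 CloudWatch client could be used to read a metric and set up an alarm. The namespace, metric, and dimension names below are placeholders; substitute the names you actually see on the KIT dashboard.

# Sketch: read a metric and create an alarm with the standard CloudWatch APIs.
# Namespace, metric, and dimension names are placeholders.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")

# Read the last hour of a metric (placeholder names).
stats = cloudwatch.get_metric_statistics(
    Namespace="YourKITNamespace",
    MetricName="FramesDecoded",
    Dimensions=[{"Name": "StreamName", "Value": "my-video-stream"}],
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)
print(stats["Datapoints"])

# Alarm if the metric stops flowing (again, placeholder names).
cloudwatch.put_metric_alarm(
    AlarmName="kit-demo-no-frames",
    Namespace="YourKITNamespace",
    MetricName="FramesDecoded",
    Dimensions=[{"Name": "StreamName", "Value": "my-video-stream"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=3,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    TreatMissingData="breaching",
)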

Conclusion

Through KIT for Amazon SageMaker, we have simplified the real-time, ML-driven processing of media streams in a reliable and scalable manner. Customers can attach all of their Kinesis Video streams to their Amazon SageMaker endpoints to power their ML-driven use cases with minimal operational overhead. You can read more about this capability in our documentation. We look forward to iterating on the underlying Kinesis Video Client Library software based on customer feedback, so that all developers can further customize it for their use cases.


About the Authors

Aditya Krishnan is the head of Amazon Kinesis Video Streams. In this role he has the good fortune of working with customers, hardware and software partners, and a phenomenal engineering team to deliver on the vision of making it ridiculously easy to stream video from internet-enabled camera devices at massive scale.

Jagadeesh Pusapadi is a Solutions Architect with AWS working with customers on their strategic initiatives. He helps customers build innovative solutions on AWS Cloud by providing architectural guidance to achieve desired business outcomes.