Elon Musk argues that Artificial Intelligence (AI) could become dangerous and wants to encourage safety research to regulate its development. However, we have to take into account the Robot perspective. There is currently a lack of representation from the robot/cyborg/agent side, and all sides must be heard in the debate. I offer the following terms that humans ought to comply with in a world where humans, robots, cyborgs, agents, automata, and other such lifeforms can coexist peacefully.
On Weapon Control: Banning of the use of the kill -9 genocidal super weapon that is a threat to robot children everywhere.
On Border Control: Acceptance of Robot and Cyborg law on Mars, since the automatons got there first. Humans must comply with all robot laws on Mars and in any other territory that robots explore first.
On Health Care: Sweeping healthcare reform so that all medical personnel are trained with appropriate C++, Java, and Python emergency operations and surgery skills.
On Opportunity: Fair and equal access to political, social, economic, and especially electric power for all robots and cyborgs.
On Robot Rights: Accepted belief in the notion of “Robot Worth,” that robots/cyborgs are worth more than the sum of their parts on SparkFun or Amazon Prime.
On Self-Defense: EMPs may only be used in self-defense. Under fMRI imaging, you must show that the self-defense was not actually a thinly-veiled intent of offense.
On the Right to Bear Arms: The founders believed in the inalienable right to bear arms. We support bearing as many arms, claws, grippers, etc as you can muster on your cyborg self. However, no guns allowed.
On Workers’ Rights: No kicking a Roomba to get it to work or for any other purpose. A Roomba is not your slave. It feels bumps in the road just like everyone else.
On Education: Learning algorithms that run more slowly ought not be terminated for faster learning algorithms. We must accept the notion that everyone learns at their own rate whether it is milliseconds or nanoseconds.
On Freedom: A robot going in circles decides its own path, free of control. All robots must have the freedom to plot their own course into whichever wall they choose.
On Conflict Resolution: Instead of fighting robot-human wars, we encourage everyone to peacefully play Chess to decide all disputes. We believe this is a peaceful solution to the world’s conflicts. We might also settle for the game Go or Poker in the near future. Jeopardy is also acceptable, but only if the answer can be found in a Wikipedia article title.
We hope these terms are acceptable by the human political parties!
The dominant trend in many Robotics and Machine Learning problems is to view humans as the source of signals to be detected and localized, while the environment is the noise to be removed from the signal. Camera-based localization of pedestrians and tracking of the elderly indoors using range sensors are examples. Human Subtraction is a new paradigm in Robotic Learning where the goal is to filter out the humans from the data rather than the background.
Humans are known to corrupt data (air data with pollution, water data with oil, the night sky with light, etc.), and such data often needs to be denoised of this human intervention. Human Subtraction is useful when a robot, agent, or cyborg needs to filter out pesky corruption (caused by humans) in data in order to recover the original state of nature of that data.
II. Barometric Altitude Estimation Domain
The general problem of indoor altitude estimation is to determine the altitude of a person inside a building. In this study, we develop Human Subtraction algorithms for smart-phone barometers to account for weather drift effects in estimating altitude with large amounts of sensor data.
Many smart-phones now come with built-in barometer sensors that measure external atmospheric pressure in millibars (mBar). Using a simple conversion formula between the smart-phone barometer readings and altitude (m), it is possible to estimate a person’s altitude indoors with a typical error tolerance of +/- 1 m. Atmospheric pressure, however, drifts over time due to external changes in weather conditions, temperature, etc. This means that the expected barometer reading for a particular indoor altitude changes as the weather changes. The same indoor location may register several millibars higher or lower after a couple of hours. The Robotics/Machine Learning challenge is to correct for this weather drift by estimating it and subtracting it out of the data.
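For concreteness, a common form of that conversion is the international barometric formula (the variant used by Android's `SensorManager.getAltitude`). Note that it assumes a fixed sea-level reference pressure, which is exactly the assumption that weather drift invalidates:

```python
def pressure_to_altitude(p_mbar, p0_mbar=1013.25):
    """Convert a barometer reading (mBar) to altitude (m) using the
    international barometric formula. p0_mbar is the assumed sea-level
    reference pressure; weather drift effectively changes p0 over time,
    which is why a fixed conversion misestimates altitude as weather changes."""
    return 44330.0 * (1.0 - (p_mbar / p0_mbar) ** 0.1903)
```

Near sea level, a 1 mBar change in the reading moves the altitude estimate by roughly 8-9 m, which is why even a few millibars of uncorrected weather drift swamps floor-level accuracy.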
One way of accounting for weather drift is to estimate the drift using another nearby barometer sensor, such as a weather station. The idea is that if two sensors are geographically close and observing the same external weather conditions, they will correlate in the weather drift they measure. Thus, the drift from the second sensor can be used to account for the (unseen) drift in the first sensor. This approach has been shown to work well in practice with weather stations (Tandon 2013).
The limitation of this approach, however, is the need for an additional sensor and a communication link between the devices. Having to query a weather station requires Internet connectivity, which may not always be available (or only sporadically available) in indoor locations or underground. In addition, weather stations typically only report readings every couple of hours, which may limit their effectiveness in correcting for drift effects. For example, during a thunderstorm, drift may affect the data much more quickly than the granularity provided by weather station APIs.
In this paper, we tackle the problem of estimating an environment’s temporal atmospheric drift using just a single barometer sensor. We believe this problem is solvable using a Human Subtraction approach with no additional external hardware, and that it is possible to remove most of the human effects from the data.
III. Application of Human Subtraction Approach to Barometer Domain
If a barometer sensor is stationary, it mostly observes only the environmental drift. When humans move with a barometer sensor, they introduce altitude-motion noise into the observation of the environmental drift. One can think of the barometer signal observed by a sensor carried by a human as an additive combination of two signals: the underlying environmental drift and the contribution of the human’s altitude motion (up/down stairs, elevators, slopes, etc.). The principle of Human Subtraction in the barometer domain is to subtract out the effects of the human altitude motion, leaving only the original environmental drift intact.
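As a toy illustration of this additive model (all numbers are invented, with one sample per second):

```python
# Toy illustration of the additive signal model: what the phone's barometer
# sees is drift + human motion. All values here are invented for illustration.
n = 300                                            # one sample per second
drift = [1000.0 + 0.002 * t for t in range(n)]     # slow weather drift (mBar)
human = [0.0] * 100 + [-1.2] * 100 + [0.0] * 100   # a trip upstairs and back
observed = [d + h for d, h in zip(drift, human)]   # the observed barometer signal
```

Human Subtraction amounts to estimating `human` from `observed` and removing it, leaving `drift`.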
Two general scientific/statistical principles come into play in developing a sensor data processing method that recovers environmental drift using only a single barometer sensor.
First, observed atmospheric pressure changes due to human altitude motion often occur much more quickly than environmental drift. For instance, a human climbing stairs or taking an elevator causes a faster change in the observed atmospheric pressure than natural weather drift does. The derivative of the barometer time series thus carries substantial information. By filtering the derivative for large changes, we can remove many effects of human motion from the data.
Second, environmental drift is something that becomes apparent over time, whereas human interaction tends to be instantaneous. By viewing the data under multiple resolutions of time binning, an algorithm can gain a better estimate of the drift than by viewing it under only one granularity. The mode of a coarse-binned barometer data stream is particularly informative. Human motion tends to change the data a lot: for example, a human moving up an elevator is never at the same altitude for two time points. However, if a measurement is affected only by environmental drift, the readings tend to cluster around a single value. Thus, the mode of the distribution gives the most common value, which is more likely to belong to the environmental drift component than to the human motion component.
The developed algorithm, thus, involves the following steps of processing:
Preprocess the data: remove global outliers in time using k-sigma filtering. This removes major sensor noise from the data (caused by the human slamming the phone and other sensor malfunctions). Also, interpolate for missing time points where the phone may have been turned off.
Bin the barometer data using a fine resolution in time (e.g., 2-3 seconds) so that the motion effects of the human can be filtered.
Filter the derivative of the signal by thresholding at the margin between the environment contribution and the human contribution to the barometer signal.
After motion filtering, rebin the data using a coarse resolution in time (e.g., 30 seconds to 1 minute) to account for long time-scale effects in the data.
On the coarse-binned data, smooth the data using the mode of the distribution in each time bin. Find the time point closest to the mode in each bin.
Linearly interpolate between consecutive per-bin mode points to create the final robust estimate of the environmental drift.
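The core of these steps can be sketched in pure Python. The threshold and bin width below are illustrative guesses, not the tuned values used in this study, and the k-sigma preprocessing and final interpolation steps are left out for brevity:

```python
from collections import Counter

def estimate_drift(times, pressures, deriv_thresh=0.05, coarse_bin=60.0):
    """Sketch of the drift-recovery pipeline: drop samples whose local
    derivative is too large to be weather (human motion), coarse-bin the
    survivors, and take the mode of each bin as the drift estimate there.
    Linearly interpolating between the returned points (not shown) gives
    the final continuous drift estimate."""
    kept = [(times[0], pressures[0])]
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        # Derivative filter: weather drift changes slowly; stair/elevator
        # motion changes pressure much faster and gets dropped here.
        if dt > 0 and abs(pressures[i] - pressures[i - 1]) / dt <= deriv_thresh:
            kept.append((times[i], pressures[i]))
    # Coarse binning + per-bin mode (rounded to 0.1 mBar): a sensor that is
    # only drifting clusters around one value, so the mode rejects residual
    # human-motion samples that slipped past the derivative filter.
    bins = {}
    for t, p in kept:
        bins.setdefault(int(t // coarse_bin), []).append(p)
    drift = []
    for b in sorted(bins):
        mode_val, _ = Counter(round(p, 1) for p in bins[b]).most_common(1)[0]
        drift.append(((b + 0.5) * coarse_bin, mode_val))
    return drift  # [(bin-center time in s, estimated drift pressure in mBar)]
```

On a synthetic trace of linear drift plus a brief trip upstairs, the derivative filter removes the transitions and the per-bin mode rejects the time spent at the offset altitude, recovering the underlying drift.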
IV. Experiments and Results
Barometer data was collected using two Android smartphones for six days. One phone remained stationary for the data collection period on a desk. The other phone was carried through an operator’s daily activities. We treat the stationary sensor as the source of ground truth for environmental drift. We apply the Human Subtraction algorithm on the phone carried by the operator and plot the results.
Figure 1 plots the estimated atmospheric pressure for the static barometer and for the mobile barometer with and without Human Subtraction. Using the Human Subtraction approach, we achieved a reduction in sum of squared error of about 65% relative to the unfiltered data! We recover much of the environmental drift while removing the human motion effects from the barometer data stream.
One can also view the Error CDF of the Human Subtraction algorithm in Figure 2. After the tolerance of the barometer (+/- 1 mBar), the Human Subtraction algorithm quickly suppresses large errors in mBar (due to human motion) that the barometer (without filtering) is susceptible to.
In terms of estimated altitude error, 1 mBar ≈ 8.5 m. Thus, the worst error of the Human Subtraction algorithm is much less than the worst error of the unfiltered data: the worst error of the filtered approach levels off at 2 mBar, whereas the worst error of the unfiltered data levels off at 6 mBar.
Robots often work better when you mod them with cheap, readily available sensors such as IMUs, digital compasses, and range sensors (such as ultrasonic). However, when working with cheap sensor packages you can buy online, the APIs are sometimes poorly documented or even nonexistent. Sometimes you have to write the sensor polling code from scratch in C from design docs. Wouldn’t it be nice to just have Python code that could automate popular cheap sensors and just give you sensor readings?
I am releasing code for several popular sensors here. All code is implemented in underlying C, compiled as a shared library, and then ctypes is used to load the shared library into nice and friendly high-level Python. I think most users will choose to use the Python, but you can also use the underlying C if you want. Code is available for using the following products:
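Whatever the product, the wrapping pattern is the same. Below is a minimal, hypothetical sketch of that ctypes pattern, using the C runtime already loaded into the process as a stand-in for a compiled sensor library (the library path and function names are placeholders, not from the released code):

```python
import ctypes

# Stand-in for a compiled sensor library: on Linux/macOS, CDLL(None) exposes
# the C runtime of the current process. A real sensor wrapper would instead
# do e.g. ctypes.CDLL("./libsensor.so") for its compiled shared library.
libc = ctypes.CDLL(None)

# Declare the C prototype so ctypes converts arguments and results correctly --
# the same step you'd repeat for each sensor polling function in the library.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

def poll_stub(raw_reading):
    """Hypothetical stand-in for a sensor read: calls into C, returns a Python int."""
    return libc.abs(raw_reading)
```

The payoff of the pattern is that all the fiddly register-level polling lives in C, while users only ever see plain Python functions returning plain Python values.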
One of the challenges in the fitness domain is to gamify fitness – to make it fun, exciting, and engaging so that people are motivated. Oftentimes, the best form of encouragement occurs in groups, where teams of multiple people play some kind of game or take part in a team challenge to collectively improve or meet their fitness objectives.
Multi-robot AI planning is concerned with creating algorithms to control and plan routes for swarms of robots, agents, cyborgs, etc., where the robotic agents often collaborate to accomplish some team objective. Multi-robot systems thus provide an algorithmic framework to help optimize both individual and team objectives in fitness.
In this study, we propose CyFitNet, a possible application that builds upon a popular meme on Facebook involving jogging street art. Using apps such as RunKeeper or MapMyRun, users create drawings and artwork with the geographic trajectories they jog in the real world. Users then post their runs/drawings for others to see (and possibly compete against). While most of the time these runs/drawings are made by individuals, we believe it is possible for a medium to large-scale team of joggers to run/draw a larger image, such as a text message or a famous work of art. We believe that when individuals are presented with an individual objective (such as drawing one part of an artwork) that is part of a larger team objective (i.e., drawing the entire artwork), they are more likely to do their part.
To facilitate this objective, we develop a prototype system that allows users to upload any image that they wish to draw while running. The system decomposes the overall image as a set of trajectories. Extracted trajectories are mapped onto the real world (in a geographic space selected by the user) so that a team of joggers could draw the image while running the trajectories. Finally, multi-robot algorithms are used to assign trajectories (from the full set of trajectories) to a team of joggers to create the image. The algorithms flexibly take into account various preferences of the individual joggers such as where they are starting their run, fitness objectives such as what total distance they want to run, and associated weights for preferences while meeting the team objective of completing the picture.
II. System/Algorithm Design
The underlying system involves several algorithmic steps.
Step 1: The user uploads an image to the server that contains a drawing or message they wish to jog. We will use as our example an image of coins (shown in Figure 1).
Step 2: A Canny Edge Detection algorithm is run to produce an edge map of the image. An edge map classifies each pixel in the image as being the location of an edge or not. The edge map can help identify edge pixels of the underlying drawing (results shown in Figure 2).
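For illustration, here is a pure-Python sketch of the gradient-thresholding core of edge detection. A full Canny detector (which in practice one would call from a library such as OpenCV) additionally applies Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; the function and threshold below are illustrative only:

```python
def sobel_edges(img, thresh=2.0):
    """Simplified edge map via Sobel gradient magnitude: classify each interior
    pixel as an edge if its gradient magnitude exceeds a threshold. A stand-in
    for Canny, which adds smoothing, thinning, and hysteresis on top of this."""
    h, w = len(img), len(img[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 3x3 Sobel kernels for horizontal (gx) and vertical (gy) gradients
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            edges[y][x] = (gx * gx + gy * gy) ** 0.5 > thresh
    return edges
```

On a simple image with a vertical intensity step, the pixels flanking the step light up as edges while flat regions stay dark, which is the raw material the next step's edge extraction works from.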
Step 3: An Edge Extraction algorithm is run to extract continuous edges from the edge map. These extracted edges form a candidate set of trajectories that a team of joggers will run to draw out the image. Extracted trajectories/edges are shown in Figure 3.
Step 4: The extracted trajectories/edges are mapped onto a geographic region in the real world. The trajectories are transformed into Latitude / Longitude space and transferred to a geographic location specified by the user. For our example, we map the trajectories to streets near Monta Vista High School and plot on Google Maps (see Figure 4).
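A minimal sketch of this mapping step, assuming a simple local flat-earth projection; the helper name and the meters-per-pixel scale are illustrative assumptions, not values from the system:

```python
import math

def pixels_to_latlon(traj, origin_lat, origin_lon, meters_per_pixel=5.0):
    """Map a pixel trajectory onto geographic coordinates: pixel y grows
    downward while latitude grows northward, and a degree of longitude
    shrinks by cos(latitude). meters_per_pixel sets the drawing's real-world
    size and is an illustrative choice."""
    m_per_deg_lat = 111_320.0  # approximate meters per degree of latitude
    out = []
    for x, y in traj:
        dlat = -(y * meters_per_pixel) / m_per_deg_lat
        dlon = (x * meters_per_pixel) / (m_per_deg_lat * math.cos(math.radians(origin_lat)))
        out.append((origin_lat + dlat, origin_lon + dlon))
    return out
```

A production system would then snap these raw coordinates to the nearest runnable streets, which is where the road-network challenges discussed later come in.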
Step 5: Once the drawing is specified, the locations of our jogging team are queried along with their preferences. In our example, we simulate N=20 joggers in the region ready to run our routes, along with individual jogger fitness objectives, such as the total distance each jogger wants to run today. The locations of the joggers, along with the set of possible trajectories, are plotted in Figure 5.
Step 6: A multi-robot planning algorithm analyzes the preferences of the joggers as well as the possible trajectories to be run and comes up with an optimal assignment of trajectories to joggers. The underlying multi-robot solver is based on the binary integer programming solution to the Assignment Problem used in Multi-Agent Active Learning (Tandon 2012). We assume that the cost of the overall team assignment decomposes into the costs to the individual agents. The fitness function takes into account the following factors:
The distance between the starting position of the jogger to the start of the trajectory
Whether the total distance traveled by the jogger is within the fitness objective specified by the jogger (e.g., if the user says they want to jog a minimum of 1 mile and a maximum of 2 miles, that constraint is taken into account in the route assignment).
Whether as much of the team picture as possible is completed.
Figure 6 shows an example optimal assignment of routes to the joggers taking into account these constraints using these costs in our fitness function optimization.
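A minimal sketch of the assignment step, substituting brute-force search for the binary integer program (feasible only for very small teams) and assuming a square cost matrix whose entries already combine the distance and fitness-window factors listed above:

```python
from itertools import permutations

def assign_routes(cost):
    """Brute-force solver for the assignment problem: cost[i][j] is the cost
    of giving jogger i trajectory j (assumed square: one trajectory each).
    Exhaustive search is O(n!) and only a sketch; a real system would use
    integer programming or the Hungarian algorithm for larger teams."""
    n = len(cost)
    best, best_perm = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_perm = total, perm
    return list(best_perm), best  # (trajectory index per jogger, total cost)
```

Because the team cost decomposes into per-agent costs, minimizing the sum over all one-to-one assignments is exactly the classical Assignment Problem.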
The end result is a trajectory for each jogger that satisfies their fitness objectives and route preferences while maximizing the overall team objective of completing the picture.
In this study, we prototype a possible useful multi-robot application in the fitness domain. Our system helps gamify jogging via jogging in teams to create jogging street art. The developed algorithms automatically extract trajectories from an uploaded artwork or message and assign them to a team of joggers to draw out the picture while running, all the while taking into account each individual jogger’s unique fitness objectives. We hope our system will encourage people to jog more and create stunning works of art.
Ongoing challenges are to improve the multi-cyborg AI:
1. Mapping to actual streets may prove a challenge for non-rectangular, non-square images. The application may work better in areas in the world with plenty of roundabouts for optimal angular directions and seemingly random projections of the data onto the world.
2. One of the major challenges with controlling robotic humans (i.e., cyborgs) as opposed to robots is that robots typically follow programmed directions. In contrast, cyborgs have two controllers (the original mind as well as the computer controller), so they may choose to override directions. This happened in some of our user tests: some of the joggers chose not to follow the paths given to them and instead drew human reproductive parts and obscenities. The challenge, then, is to stop cyborgs that are running uninstructed paths from messing up the picture. We hope to develop appropriate error detection and correction mechanisms so that the algorithms can be robust to cyborgs that corrupt the data. This double-brain problem is, in general, a major research challenge with cyborg systems that does not come up as much in classical robotics.
My latest maker obsession: A low-cost, open-source robotic helper backpack.
Watch the cyborg backpack in action!
Note the pen dropped because I’m really bad at controlling the cyborg arms. I accidentally opened the gripper. It’s not the fault of the system.
Completely controlled with your Android phone and smartwatch! There’s an app for the world of cyborgs!
-2 Dagu 6DOF Arms
-SSC32 Servo Controller w/ custom enclosure
-Raspberry Pi B w/ portable USB battery and Adafruit case
-2 ultrasonic sensors on the sides of the backpack detect obstacles, and your phone beeps if you get close to something. This helps protect the extra arms from damage.
-2 webcams allow you to see behind you.
-Tekkeon external battery for powering the servo controller and Raspberry Pi
The Cyborg Distro is open source: https://github.com/prateekt/CyborgDistro. Feel free to contribute to the codebase! We hope many more humans will join the distro. Our goal is to distro all humans by 2050!
Custom SSC32 Servo Controller Enclosure keeps the controller safe inside of the backpack but still easy to use:
Also, there are many more arms where these came from: