Affective Computing

Attended the Vlab Affective Computing event at the Stanford Business School on May 21, 2015. Zavain Dar, senior associate at Lux Capital, moderated and provided an overview of the industry. #vlabAC
  • The three pioneers of emotion analysis are Carl-Herman Hjortsjö, who first classified human facial movements; the psychologist Paul Ekman, who built on that work to develop the Facial Action Coding System and subsequently created an atlas of emotions with over 10,000 facial expressions; and Rosalind Picard of the MIT Media Lab, who coined the term Affective Computing.
  • Can we decipher what emotions really are? Affective computing digests a scene, processes it using AI, and then displays or manipulates it.
  • We spend $32 billion a year on consumer research.
  • We convey emotions through vocal, gestural, facial, muscle, skin, heart, and brain signals.
  • They cited the New York Times best-selling book Predictably Irrational by Dan Ariely.
  • There are many applications for affective computing including education, wellness, healthcare, customer service, advertising, market research and entertainment.
Yuval Mor, CEO of Beyond Verbal, spoke about his company:
  • As husbands know, it is not what your wife says, but how she says it, that indicates whether she is happy and relaxed or tense and stressed. Babies can sense this universal language.
  • It is hard to get sensing working in the wild; we are not there yet. As a field, we need to spark the imagination of people.
  • We are giving machines the ability to truly listen.
  • Emotion analysis is language- and culture-independent. We have 1.5 million samples from 40 languages to prove it.
  • In essence we can do real-time emotion analytics and discovery.
  • The next phase for our field is listening to our body.
  • We can provide continuous and passive monitoring of Parkinson’s Disease patients by sensing the tone of their voice. It can be used as a device for monitoring pain and overall well-being.
  • We can monitor kids with attention-deficit behavior and remind parents of appropriate coping skills.
  • How do we respond to content? Many people think it is unnatural to think about our emotional state and what to do about it. We provide tools to help you get better at it.
  • It is valuable to know how large crowds react; we can automatically analyze the emotions of everyone in the auditorium as they react to what is being said.
  • Our field needs to focus on narrow niches initially, but eventually these techniques will become ubiquitous.
  • Why do some ads work well and others flop? What types of ads are only halfway effective?
  • Men are good at sensing emotions in others, but are poor at sensing their own emotions.
The other members of the panel were Rana el Kaliouby, co-founder and chief strategy and science officer of Affectiva; Ken Denman, president and CEO of Emotient; and Mary Czerwinski, principal researcher and manager at Microsoft.
  • Mary designed ways of sending information from a wrist sensor on an autistic child to the cellphones of parents and other caretakers so they could know about the stress their children were under and respond accordingly. She has been in the field for four years.
  • The Japanese are very guarded in public; their faces show much more emotion when they are in private.
  • Emotient has raised $20 million. Ken’s daughter is 25 and commented that there wasn’t enough color in a bright blue poster.
  • Different technologies are complementary.
  • Women smile 40% more than men in the United States, whereas there is no difference in the United Kingdom.
  • Affective computing apps today are not fun. They need to be infused with social media to be more widely adopted.
  • There is no killer application. It will take time for this technology to happen. We need to validate the space.
  • We can use your cell phone camera to automatically locate your eyes and mouth, and from these regions extract data points used to infer facial expressions (a rough sketch of this kind of pipeline follows this list).
  • We can provide objective measurement of patient pain and depression. There are also applications in diagnosing post-traumatic stress (PTS) and concussions.
  • You have to ask yourself these questions. Are you solving a real problem? How will you deliver value? You will get a lot of conflicting advice, so ultimately you’ll have to go with your gut feeling.
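As a rough illustration of the camera-based pipeline mentioned in the panel, here is a minimal Python sketch that locates a face and eyes in a webcam frame using OpenCV's bundled Haar cascades. It is not any panelist's actual system: the expression-classification step such a pipeline would feed is omitted, and the cascade files and webcam index are assumptions for the example.

```python
# Minimal sketch of the camera-to-facial-regions idea described in the panel.
# Assumes the opencv-python package; the expression classifier itself is omitted.
import cv2

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_face_regions(frame):
    """Return (face_box, eye_boxes) for the first face found, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    face_gray = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(face_gray)
    # A real affective-computing system would feed these regions (plus the mouth)
    # into a trained expression classifier; that step is not shown here.
    return (x, y, w, h), eyes

cap = cv2.VideoCapture(0)   # default webcam, an assumption for this sketch
ok, frame = cap.read()
if ok:
    print(detect_face_regions(frame))
cap.release()
```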

Stanford Computer Science 50th Anniversary

The Stanford Computer Science 50th anniversary was held on April 28, 2015 at the Arrillaga Alumni Center. Alex Aiken, the computer science chair, started with an overview of the department:
  • The department was started in 1965 by hiring John McCarthy.
  • We have 53 faculty and 20 joint appointments. We hired 6 faculty in the last 12 months, 4 of them senior hires.
  • We have 700 undergraduate (junior and senior) computer science majors, 28% female. The masters program has 400 people and has doubled over the last 7 years. We have 193 Ph.D. students.
  • “Nerd Nation” has become a term for a high-status social group.
  • Some challenges for computer science in the future: (1) computational thinking, (2) power constraints (limits to how much power a chip can consume), (3) a slow-motion revolution in the software stack, (4) increasing human/machine interaction.
Jure Leskovec spoke on The Web: A Technological and a Social Network.
  • The development of the world wide web was equivalent to the discovery of a new continent.
  • We need to understand user behavior better.
  • My research has investigated how to create incentives using badges, and their ability to influence and direct behavior.
  • One side result of our research has been the ability to identify trolls, sometimes with as few as ten interactions.
Dan Boneh spoke on Computer Security: Is It Getting Better or Worse?
  • Computer security will always be needed, and the need will only grow as the number of IoT (Internet of Things) sensors and mobile devices expands.
  • Many sensors on a phone can be accessed without permission.
  • There are many opportunities for abuse. For example, defects in a camera sensor can be used to identify the unique phone that took a picture.
  • Accelerometers have unique errors; no two register exactly the same value for gravity at rest.
  • A gyroscope has a sampling rate of 200 Hz. You can use it to pick up speech—not perfectly, but you can obtain information from it.
  • The power meter in your phone fluctuates as a function of how far you are from a cell tower and the obstacles between you and it. If you have mapped this previously, and you collect power measurements as you move, you can create a map showing the path you’ve taken.
  • Machine learning is essential to making use of this sensor information.
  • My other area of research is the crypto wars. The first, in 1975 to 1976, was over the ability to publish research. The second, in 1994 to 1995, was over the Clipper chip and was addressed by double and triple encryption. We are currently going through a third war, with various law enforcement and national security agencies trying to add back doors to devices.
Fei-Fei Li spoke on A Quest for Visual Intelligence in Computers.
  • It is a challenge to unify / make sense of what we are seeing. If you want a machine to think, you need for it to understand what it is seeing.
  • 50% of our brain is used to process visual information and recognize people and objects.
  • We only learn what a cat is by seeing multiple cats. Children learn by exposure; every 200 milliseconds they process a new image.
  • ImageNet is a huge database of images. 48,940 workers in 167 countries have cataloged 15 million images into 22 thousand categories.
  • We developed a convolutional neural network with 24 million nodes and 140 million parameters. It does automatic analysis of objects in images (a minimal sketch of such a network follows this list).
  • In another area of research, we can automatically use surveillance cameras to recognize the makes of cars in an area. We have created maps showing how car prices correlate with crime rates and presidential voting.
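For readers unfamiliar with convolutional networks, here is a minimal PyTorch sketch of the kind of architecture described in the talk. It is far smaller than the 140-million-parameter network mentioned and is only meant to show the structure: stacked convolution, nonlinearity, and pooling layers feeding a classifier. The layer sizes and class count are arbitrary assumptions.

```python
# Minimal sketch of a convolutional neural network for image classification.
# Assumes PyTorch; this toy model only illustrates the structure of such a network.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local image filters
            nn.ReLU(),
            nn.MaxPool2d(2),                              # downsample by 2x
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)

    def forward(self, x):
        x = self.features(x)       # (N, 32, 56, 56) for a 224x224 input
        x = x.flatten(1)
        return self.classifier(x)  # unnormalized class scores

model = TinyConvNet(num_classes=10)
scores = model(torch.randn(1, 3, 224, 224))  # one fake 224x224 RGB image
print(scores.shape)                          # torch.Size([1, 10])
```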
Balaji Prabhakar spoke on The Data Science of Getting from Here to There.
  • My research has been focused on collecting and using data to reduce traffic congestion. The cost of additional fuel and time due to congestion was estimated at $115 billion in 2007. Transportation accounts for 27% of U.S. emissions.
  • We have investigated how to reduce demand by using incentives, as opposed to penalties. For example, we pay a $50 incentive per car per year for driving at off-peak times.
  • By using big data and mobile apps, we believe we can increase supply.
  • We have been investigating when, where and why congestion occurs, a space-time engine.
  • Look at things that move, and places where people tap-in and tap-out. It is a big, massive jigsaw puzzle.
Michael Bernstein spoke on Crowdsourcing: A Meeting of Minds.
  • How might many small tasks and many people enable us to tackle very large problems?
  • Can we detect lying? What is the proper design process for crowd sourcing?
  • We have compared crowds of paid experts with flash teams. We currently can’t create a product like an iPhone with a crowd, but maybe someday we will.
Tim Roughgarden spoke on Current and Future Challenges in Theoretical Computer Science.
  • We have hired 5 faculty in the last 5 years, expanding our department.
  • The P vs. NP problem: a problem in P can be solved efficiently, while a solution to an NP problem can be checked efficiently. Is P = NP or not? Can we solve it in 10 years? No, but maybe we will have made partial progress on it.
  • MapReduce, Hadoop, and Spark are examples of having many computers work on slices of data (see the sketch after this list).
  • What can or cannot be done? Take matrix multiplication: there are problems that require on the order of n*n steps to solve.
  • We have been working on FCC auctions where the objective is to buy low and sell high. You want to make sure that there is no incentive to game the auction. In order to buy back spectrum, you need to solve an NP problem in less than half a second.
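As a minimal sketch of the map-reduce idea mentioned above, the following Python example counts words by mapping over slices of the data in parallel worker processes and then reducing the partial counts. The process pool merely stands in for a cluster, and the input text is made up for the example.

```python
# Minimal sketch of map-reduce: many workers each process a slice of the data,
# and the partial results are merged. Uses only the Python standard library.
from collections import Counter
from multiprocessing import Pool

def map_count(chunk: str) -> Counter:
    """Map step: count words in one slice of the input."""
    return Counter(chunk.split())

def reduce_counts(partials) -> Counter:
    """Reduce step: merge the per-slice counts."""
    total = Counter()
    for c in partials:
        total += c
    return total

if __name__ == "__main__":
    chunks = ["to be or not to be", "that is the question", "to be is to do"]
    with Pool(processes=3) as pool:          # each worker handles one slice
        partials = pool.map(map_count, chunks)
    print(reduce_counts(partials).most_common(3))
```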
Mehran Sahami spoke on Expanding the Frontiers of Computer Science Education.
  • In 2007 we had 50% of the computer science students we had in 2000; today we are back to 2000 levels. Peak enrollment was in 2001 and the low point was in 2006. In 2003, job off-shoring gained a lot of media attention.
  • Systems, Artificial Intelligence and theory are core computer science topics, but there are many peripheral areas. The dilemma is that in four years, there is a limited number of classes that can be taken.
  • Today we have pre-medicine students majoring in computer science. Computer science has become the largest undergraduate major at Stanford.
  • The 28% of students who are female today equals, in absolute numbers, the department’s total enrollment six years ago. Even more encouraging, our introductory computer science class is 45% female.
  • We are currently teaching 65,000 units of computer science. This is twice what we taught six years ago. Despite this increase in units, the faculty has had less than a 2X increase.
  • The big test for computer science will be the battle over changing the curriculum.
Marissa Mayer spoke on Applied Computer Science Solutions in the Digital World.
  • She started out as a pre-med major, but ended up majoring in computer science and getting her master’s. It took her multiple years to get the master’s because she was doing a lot of TAing.
  • We have moved from Yahoo’s directory to search. But search is still in its infancy. For example, we look at satellite images to determine the amount of deforestation.
  • How do we strike the right balance between privacy and security?
  • From Eric Schmidt at Google, I learned that an executive doesn’t actually do anything. Rather they just help the team get things done.
  • There are a lot of choices and many shades of gray, most decisions aren’t in black and white.
  • How do you make decisions? There are a few decisions that are very daunting; you have to make them absolutely perfectly.
  • How do you strike a work-life balance? There is a rhythm to how people work.
  • We are not doing a good enough job of getting people to take computer science. It is a fundamental skill.
  • To my surprise, I was a campus icon: the only blonde in upper-division computer science classes.
  • I agreed to a profile of myself in Vogue. I saw it as a way to help nudge women to consider a career in computer science.
Kara Swisher moderated a panel with Jerry Yang, Ramji Srinivasan, Clara Shih, and Sam Altman on the impact of the Stanford Computer Science department on Silicon Valley.
  • Sam Altman’s startup was just something to do with friends; he just fell into it.
  • Clara Shih noted that VCs and startups are changing the world. They have made it okay to be a geek. She was a Mayfield Fellow before founding Hearsay Social.
  • Ramji initially went to work for Morgan Stanley on Wall Street. He notes that Silicon Valley is still the primary place to do startups; it has had that mindset since the 1950s, much as Los Angeles is the center of the entertainment industry. Silicon Valley has gone through multiple cycles of boom and bust. Housing prices are driving founders away from the area.
  • The types of jobs created by startups are very skilled. What we are seeing is the integration of computer science with industries such as healthcare, education, physics and chemistry.
  • When looking to fill a position, you should have your recruiter come up with at least one female and one male candidate. Women have a harder time getting more money and better positions. You need to hire women early at a company; nobody wants to be the first female employee.
  • Education and healthcare are two sectors that are ripe for disruption.
  • You want a human-friendly, superhuman artificial intelligence.
  • We are on the brink of curing cancer and various neurological diseases.
  • While intelligence has its limits, stupidity is limitless.
John Markoff moderated a panel with distinguished Stanford alumni, Mike Schroepfer, Vint Cerf, David Shaw and Bill Coughran discussing their careers and future directions.
  • SAIL was a very strange environment, e.g. its Prancing Pony vending machine. (You could bet double or nothing and get two vending items or nothing for your money).
  • Computer science has developed the principle of how to decompose very large problems. Once you divide and conquer a problem, you can put the pieces back together. These principles are now being applied to other industries. Architecture is important.
  • The focus of computer science is on things that work. By definition, once something works, it is no longer artificial intelligence!
  • Amazon Web Services uses TLA+, a formal specification language. A specification describes the set of all possible legal behaviors of a system; in this manner, you make assertions about the proper behavior of a program (a small sketch of this idea follows this list).
  • Repeating what Marissa Mayer said earlier, security problems are a consequence of software bugs. We need to design software that is more reliable and safer. When you are doing software for a plane, the criteria are simple: safely keep the plane in the air.
  • Ph.D.s should have an expiration date. You have to keep learning in this field, otherwise the half-life kills you. The traditional model of getting a job and staying with it until you retire is broken.
  • It is very useful to have people who are too young to know that something can’t be done. Often the reasons have changed and need to be reexamined.
  • Successful founders of startups surround themselves with people who have the experience they don’t have.
  • Ivan Sutherland has been doing interesting things with asynchronous (clockless) logic.
  • At Stanford, by making computer science socially relevant, we caused the female mix to exceed 50%. Our success is directly related to the extent we relate to other disciplines. It is important that any class have at least three women in it because it changes the class social dynamics.
  • What do you do with a buggy artificial intelligence?
  • We have made great progress in image recognition, but we are still lacking basic fundamentals. There is a lot left to be done.
  • General safety is hard to do, specific safety is much easier.
  • To what extent are humans in the loop? I’m more worried about current systems that do what we want them to do.
  • Vint Cerf was asked how the Internet was developed. It started with the military command and control environment. DARPA wanted to integrate it with communications. But in 1973, we didn’t have the ability to change the existing networks. Instead, we created a gateway with edge knowledge. There was no routing in the Internet protocol; it simply carried a bunch of bits. DNS (the Domain Name System) came afterwards. What we developed was a very general, robust architecture.
  • In the 1980s, the field became very empirical. There was a lot of synthesis of math, engineering and various disciplines. When you deal with math and physics, you have to struggle with reality.
  • I’m very concerned about safety and security as we scale up the Internet. One can just imagine 108,000 refrigerators being used as a botnet to conduct an attack.
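To illustrate the TLA+ remark above, here is a small Python sketch of the underlying idea: a specification defines the next-state relation (the set of all legal behaviors), and a checker asserts an invariant over every reachable state. This is not TLA+ itself; the toy counter system and the exploration depth are assumptions made up for the example.

```python
# Minimal sketch of the specification idea behind the TLA+ remark: a spec defines
# all legal behaviors of a system, and a checker asserts a property over every
# reachable state. This is not TLA+, just an illustration in plain Python.

def next_states(state):
    """All legal successor states (the 'next-state relation' of the toy spec):
    a counter may increment (capped at 3) or reset to 0."""
    return {min(state + 1, 3), 0}

def invariant(state):
    """The property asserted of every reachable state."""
    return 0 <= state <= 3

def check(initial_states, depth=10):
    """Breadth-first exploration of every behavior up to the given depth."""
    seen, frontier = set(initial_states), set(initial_states)
    for _ in range(depth):
        frontier = {s2 for s in frontier for s2 in next_states(s)} - seen
        seen |= frontier
    assert all(invariant(s) for s in seen), "invariant violated"
    return seen

print(check({0}))   # {0, 1, 2, 3}: every reachable state satisfies the invariant
```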
Hector Garcia-Molina moderated a panel with Andrew Ng, Jennifer Widom, Bill Dally, and Pat Hanrahan on Challenges in Computer Science.
  • A major concern is that we have twice as many students as six years ago, but our faculty numbers have remained constant. We have accomplished this by having a separate teaching and lecturer faculty apart from the research faculty. However, one-on-one interaction with faculty has suffered. Our faculty-to-student ratio is 1 to 25, dangerously close to UC Berkeley levels. The bottom line is we need more faculty.
  • Another issue is that our faculty have only a limited amount of time to spend with students, due to their other responsibilities. Teaching coursework scales well, but the rest doesn’t.
  • It is hard to do research on “data” compared with the data resources that Google and Facebook have. It is possible to get access to this data from them; since they are closer to the customer and the technology, the key is how to manage this relationship.
  • Over the last 5 years, we have had an exodus of senior faculty to industry and other universities. Part of the reason why they have gone to industry is that their research requires it. Fortunately we have been able to maintain a relationship with them.
  • Great people still want to be here. We have lost people because we could not create positions for their significant others.
  • It is harder to raise money today than it was in the 1980s. More time is spent writing grant proposals, and the grants frequently are very specific and inflexible as to how the money can be spent. China, in contrast, is very rational about how it allocates research money.
  • We are asking faculty to focus on high impact work as opposed to the number of papers they produce.
  • It is hard for faculty to live in the Bay Area.
  • Single-threaded machines have hit their limit. We need parallel, non-von Neumann machines. We need a simple parallel programming language to implement these systems.
  • We need tools that help do optimization.
  • Faculty are not able to go to Washington D.C. to lobby. While absolute funding levels are going up slightly, small grants are very constrained.
  • There is a lot of industry funding, but it tends to be year to year.
  • What should be the initial programming language for students? Pascal, Java and Python have huge class library vocabularies. This gets in the way of learning how to program, debug and troubleshoot problems.
  • Donald Knuth asked a question regarding the deemphasis of the central core: Are we focusing on exploring or tilling the soil? Is the central core still useful, or are new applications of computer science where students should focus? The response was that while you have to know the particular case, the central core has to be trimmed to make room for applications in order to keep computer science a four-year degree.
John Hennessy, president of Stanford University provided the closing address:
When computer science got started fifty years ago, you could go to any lecture and understand what was going on. That is no longer the case.
  • Computer science is an incredible discipline, addressing things like the P vs. NP problem.
  • It requires a unique mindset for attacking problems.
  • We are blessed by advances in computing, storage and communications. We discovered that we only needed 10**100 more processors for artificial intelligence to work. What was far out became commonplace in PCs ten years later.
  • We hired key faculty: John McCarthy (LISP), Don Knuth (the Art of Computer Programming books), Vint Cerf (Internet), Forest Baskett, Ed Feigenbaum (the DENDRAL project for computing molecular structure), Mendel Rosenblum (VMware), and Luis Trabb Pardo (laser printer controllers). Our emphasis was on hitting home runs, not bunts.
  • Stanford has had a willingness to be on the cutting edge and try new things. The SAIL timesharing software led to the undergraduate LOTS (low-overhead timesharing system). The SAIL data disk provided low-cost video terminals using a head-per-track disk drive. We were the first college to have a laser printer on campus, which was used with the TeX software for typesetting books, in particular Donald Knuth’s Art of Computer Programming.
  • Stanford has had an interest in seeing things leave the campus. Our students have gone on to do incredible things in starting companies such as Yahoo, Google, Sun and Imagen. Yahoo was taking too much of the campus Internet bandwidth, so they had to go elsewhere!
  • We have a great relationship with industry. It has given us access to data and technology. It started with IBM and Bell Labs, continued with DEC, and today with Google and Facebook.
  • We have created great educational programs. We grew both the master’s and undergraduate CS programs. We developed CS+X with the humanities departments.
  • We have been willing to grow. We started with 15 faculty in the late 1970s and early 1980s. By 1985 we were up to 40 faculty. But in reality, we need to have 50 to 60 faculty.
  • Information Technology has become part of everything. We need to be bold, hire great people and engage in creative destruction.